
OCIO Joint Message on the use of generative AI from CISO Chris Shull and CTO Greg Hart 

Dear Members of the WashU Community, 
  
There has been a great deal of discussion recently surrounding the use of Generative Artificial Intelligence (AI), which is AI capable of producing content such as text, images, music, videos, code, or other media based on commands or prompts. Underlying technologies include Machine Learning (ML) and Large Language Models (LLMs). Interest in and exploration of these new tools has grown rapidly across all disciplines and among all organizations. Because this field is evolving so quickly, this email serves two purposes. First, we would like to offer a brief explanation of what each of these technologies is. Second, we are introducing some guidelines on the use of generative AI tools, such as OpenAI's ChatGPT, Google Bard, and many others. 

AI is a machine’s ability to perform tasks that would normally require human intelligence. ML and LLMs build on AI, giving machines the ability to adapt and to process massive amounts of information in order to replicate human writing, speech, and behavior.  

The university supports and encourages the responsible and secure exploration of AI tools. When using any of these tools, especially open-source, non-protected AI tools, it is vitally important to keep information security, data privacy, compliance, copyright, and academic integrity in mind. Currently, we are exploring privacy-compliant LLM solutions. Those with a particular need in this area, or with questions, are encouraged to reach out for a consultation via aiquestions@wustl.edu.  

  
AI is clearly a rapidly evolving technology. The university is tracking developments and adapting its plans to support the community in secure, compliant, and privacy-respecting ways. These guidelines will be updated as the technology advances, balancing innovation with appropriate safety controls. 
  
Initial guidelines for the use of AI tools: 

  • Be mindful not to share sensitive information: Please do not enter confidential or protected data or information, including non-public research data, into publicly available or vendor-enabled AI tools. Information shared with public AI tools is usually claimed to be the property of the vendor, is not considered private, and could expose proprietary or sensitive information to unauthorized parties. It is the user’s responsibility to protect confidential data. 
  • These tools can be inaccurate: Each individual is responsible for any content that is produced or published containing AI-generated material. Note that AI tools sometimes “hallucinate,” generating content that can be highly convincing but inaccurate, misleading, or entirely fabricated. AI-generated content may also contain copyrighted material. It is imperative that all AI-generated content be reviewed carefully for correctness before submission or publication. It is the user’s responsibility to verify everything. 
  • Adhere to current academic integrity policies: Review university, school, and department handbooks and policies for students and faculty. Schools will be developing and updating their policies as we learn more about AI tools. Faculty members should make their policies on the permitted uses, if any, of AI in classes and on academic work clear to the students they teach and advise. Students are also encouraged to ask their instructors for clarification about these policies as needed. 
  • Be alert for AI-enabled phishing: AI has made it easier for malicious actors to create sophisticated scams at a far greater scale. Continue to follow security best practices and report suspicious messages via the Phish Report button in Outlook or to phishing@wustl.edu. 
  • Connect with WashU IT before procuring generative AI tools: The university is working to ensure that tools procured on behalf of WashU have the appropriate privacy and security protections. This applies to: 
      ◦ Existing tools that add or expand AI capabilities. 
      ◦ New purchases of vendor AI tools. 

It is important to note that these guidelines are not new university policy; rather, they draw on existing university policies. Please watch for institution-specific guidance to follow this communication. We look forward to working with you in the spirit of collaboration and innovation. 

 
Sincerely, 

Chris Shull 

Chief Information Security Officer 

Gregory Hart 
Chief Technology Officer