Generative AI is a type of artificial intelligence that can learn from and mimic large amounts of data to create content such as text, images, music, videos, code, and more, based on inputs or prompts.

The university supports and encourages the responsible and secure exploration of AI tools. When using any publicly accessible, non-protected AI tools, it is vitally important that you do not enter any Washington University protected or secure data, including deidentified healthcare data of any kind, into these platforms.


Washington University ChatGPT Beta

LLM Version: OpenAI’s GPT-3.5 Turbo

Use this tool when you want a secure sandbox where you can use sensitive data and WashU intellectual property. 

This isolated instance of ChatGPT is compliant for use with sensitive data and WashU intellectual property, including information protected under HIPAA and FERPA. Because it is isolated, this tool does not pull in new information from the web and will be approximately six months behind the latest version of the large language model.

This tool is available in Beta form in order to provide a secure GPT environment to the WashU community for research, operations and education as quickly as possible. As such, it is not yet mobile friendly and users may experience limited capacity or constraints. We welcome feedback on how we can improve the user experience as we continue to optimize the tool.  

Microsoft Copilot

LLM Version: OpenAI’s GPT-4

Use this tool with non-sensitive data and when you want to connect to the most current web information.

As of February 2024, all students, in addition to faculty and staff, now have access to Microsoft Copilot (formerly called Bing Chat Enterprise) when logged in to an institutional Microsoft account. As with any external tool, it is important to understand that personally identifiable, confidential or sensitive information should not be entered into Microsoft Copilot, as it does not meet HIPAA, FERPA or similar compliance requirements.

Microsoft Copilot for Microsoft 365

Microsoft Copilot for Microsoft 365 combines the power of large language models (LLMs) with your organization’s data. It works alongside popular Microsoft 365 apps such as Word, Excel, PowerPoint, Outlook, and Teams, providing real-time intelligent assistance that enables users to enhance their creativity, productivity, and skills.

While Microsoft Copilot for Microsoft 365 is not yet available to WashU faculty, staff and students, a small testing group is evaluating the product’s potential utility for those groups.  

No expansion of the testing group is yet planned but making cutting-edge tools available to you quickly – and ensuring that we can support their secure, responsible and effective use – remains a key mission of both WashU IT and Digital Transformation. 

WashU ChatGPT FAQs

Who can use Washington University ChatGPT Beta?

WashU faculty, staff and students can access the sandbox using their WUSTL Key.

Why is this labeled a Beta? What can I expect during the Beta phase?

It is important to provide faculty, staff and students with swift access to this powerful emergent technology in a secure sandbox environment that protects sensitive data…

What version is this? How similar/different is this tool to the publicly available version of ChatGPT? Is it trained on WashU-specific datasets?

WashU’s instance of ChatGPT uses OpenAI’s ChatGPT Version 3.5 Turbo. Currently, it is not custom-trained on any WashU-specific information or datasets…


Artificial Intelligence (AI)

The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.

– Oxford Languages

Large Language Models (LLMs)

A specialized type of artificial intelligence (AI) that has been trained on vast amounts of text to understand existing content and generate original content.

– Gartner

Machine Learning (ML)

The use and development of computer systems that are able to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyze and draw inferences from patterns in data.

– Oxford Languages
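To make the definition above concrete, here is a minimal sketch of the core idea of machine learning: rather than hard-coding a rule, a program fits a model's parameters from example data and then draws inferences from the learned pattern. The function name and the numbers are purely illustrative.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b, learned from data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope is the covariance of x and y divided by the variance of x.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# "Training" data follows the pattern y = 2x + 1, but that rule is never
# written into the program; it is inferred from the examples.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))  # the learned parameters approximate 2 and 1
```

Real machine learning systems use far richer models and far more data, but the principle is the same: behavior comes from statistical patterns in examples, not explicit instructions.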

Natural-Language Processing (NLP)

Involves the ability to turn text or audio speech into encoded, structured information, based on an appropriate ontology. The structured data may be used simply to classify a document, as in “this report describes a laparoscopic cholecystectomy,” or it may be used to identify findings, procedures, medications, allergies and participants.

– Gartner
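The document-classification use of NLP described above can be sketched with a toy example: mapping free text to a structured label by matching it against a small keyword "ontology." The labels and terms below are illustrative assumptions, not a real clinical vocabulary, and production NLP systems use trained statistical models rather than keyword lookup.

```python
# Hypothetical keyword ontology mapping labels to characteristic terms.
ONTOLOGY = {
    "procedure_report": {"cholecystectomy", "laparoscopic", "appendectomy"},
    "medication_note": {"dosage", "mg", "prescribed"},
}

def classify(text):
    """Return the label whose keyword set best overlaps the text."""
    words = set(text.lower().replace(".", "").split())
    scores = {label: len(words & terms) for label, terms in ONTOLOGY.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify("This report describes a laparoscopic cholecystectomy."))
# → procedure_report
```

This mirrors the Gartner example: unstructured text is turned into a structured classification that downstream systems can act on.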


It is the user’s responsibility to protect sensitive data and verify content when using generative AI tools.

Be Mindful Not to Share Sensitive Information

Do not enter confidential or protected data or information, including non-public research data, into publicly available or vendor-enabled AI tools.

Information shared with public AI tools:

  • Is not considered private.
  • May be added to the tool’s knowledge base and provided to other users.
  • Is usually claimed to be the property of the vendor.

These Tools Can Be Inaccurate

Each individual is responsible for any content that is produced or published containing AI-generated material.

  • AI tools sometimes “hallucinate,” generating content that can be highly convincing, but inaccurate, misleading, or entirely fabricated.
  • AI-generated content may contain copyrighted material.
  • All AI-generated content should be reviewed carefully for correctness and cited properly before submission or publication.

Adhere to Current Academic Integrity Policies

Review university, school, and department handbooks and policies.

  • Schools will be developing and updating their policies as we learn more about AI tools.
  • Faculty members should teach and advise students about policies on the permitted uses of AI in classes and on academic work.
  • Students are encouraged to ask their instructors for clarification about these policies.
  • AI use may contribute to both intentional and unintended plagiarism and falsification of data.

Be Alert for AI-Enabled Phishing

AI has made it easier for malicious actors to create sophisticated scams at a far greater scale. Continue to follow security best practices and report suspicious messages via the Phish Report button in Outlook or to

Contact IT When Procuring Generative AI Tools or Adding AI Functionality in Existing Applications

The university is working to ensure that tools procured on behalf of WashU have the appropriate privacy and security protections.

  • If you have procured or are considering procuring AI tools, contact WashU IT at and provide the following:
    • Purpose
    • Data being used
    • The product or service to be used
    • Compliance with the published guidelines
    • Contact information



The appropriate citation method depends on the academic field.

Researchers should check each journal’s specific guidelines and provide proper attribution and transparency in their manuscripts when they use AI in their research (PMID: 36697395). 

Digital Intelligence & Innovation Accelerator

The DI2 Accelerator, born from Washington University’s Here & Next strategic plan for Digital Transformation, represents a bold vision for success in the new digital economy.

Course Assessment and Design

The Center for Teaching and Learning.


If you have AI questions, please contact

WashU AI News

Is AI a risk or a catalyst for advancing EDI?  

Is AI perpetuating or bridging gender gaps? “Artificial Intelligence and Gender Equality,” a thought-provoking article by UN Women, explores this question. Drawing on real-world examples, such as gender-biased AI outputs and the ways AI systems are conceptualized and built, it uncovers the exciting potential for AI to help promote a more inclusive future.

OCIO ImpacT Spotlight: New video highlights IT ImpacT efforts 

WashU IT’s Strategic Plan, ImpacT, aims to enhance decision-making by leveraging data through literacy, analytics, quality, and accessibility. Over four years, we’ll advance technology to boost WashU’s competitive edge and transform how we teach, learn, and innovate. Russell Sharp and Amy Walter from the Office of the Chief Information Officer present “Innovate & Secure: The […]

Washington University ChatGPT Beta is Now Available

WashU IT and the WashU Digital Intelligence & Innovation Accelerator partnered to launch the new WashU ChatGPT Beta, a secure sandbox that allows the use of sensitive data and intellectual property, including information protected under HIPAA and FERPA. The tool is accessible to anyone with a Washington University affiliation through a WUSTL Key login and addresses the immediate […]

Artificial Intelligence: Recent Artificial Intelligence and Digital Health Summit was a success, featuring speakers including CTO Greg Hart, as well as panel discussions and more

An Artificial Intelligence (AI) and Digital Health Summit held at Washington University earlier this month brought a mix of people to campus to learn about the technology’s ethical, legal, and regulatory aspects. AI’s impact on patient care and the quality of health care was an important topic, covered in presentations by subject matter experts and featured keynote […]

WashU Generative AI Environment Rollout Update

The Digital Intelligence & Innovation (DI2) Accelerator – part of the Here & Next Digital Transformation initiative – is working in collaboration with WashU IT to empower faculty, staff and students to effectively and responsibly evaluate and adopt AI tools and platforms, including generative AI technologies such as ChatGPT. Over the next several weeks, we […]