Artificial Intelligence Interim Guidelines

Last updated: March 4, 2024

These interim guidelines are intended to help the UW community make security and privacy decisions while we await the recommendations of the UW AI Task Force.

This guidance from the Office of Information Security and UW Privacy Office provides the University of Washington (UW) community with considerations for the secure, privacy-respecting, and responsible use of generative Artificial Intelligence (AI) at the UW. As AI technology evolves, so will the guidelines and expectations within which we, as an institution, work to safeguard UW information for our students, faculty, and staff.

We recognize the shared aspiration to use this technology. As UW delves deeper into AI, protecting sensitive data remains a paramount concern. Use caution when considering external AI systems, as they may not offer the same level of security and privacy as those within the University-managed environment.

Safeguarding Your Sensitive Information

Remember: data containing personally identifiable information (PII), protected health information (PHI), or other sensitive non-public details should not be uploaded to or processed on external AI platforms. These external platforms may lack the safeguards and compliance measures built into UW-owned and managed internal systems. Internal systems protect data within the secure UW-IT infrastructure, are covered under federal and state regulations, and are supported by documented responsibilities and protections in business agreements between AI providers and the UW.
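As a concrete illustration only (not a UW-approved tool), a script might screen text for obvious PII-like patterns before anything is pasted into an external AI service. The patterns below are assumptions for the sketch and would miss many forms of sensitive data; they are no substitute for judgment or policy:

    import re

    # Illustrative patterns only; real PII detection requires far more than regexes.
    PII_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "ssn-like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "phone-like": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    }

    def screen_for_pii(text: str) -> list[str]:
        """Return the names of any PII-like patterns found in the text."""
        return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

    draft_prompt = "Summarize this note for student jdoe@uw.edu, SSN 123-45-6789."
    hits = screen_for_pii(draft_prompt)
    if hits:
        print(f"Do not submit: possible PII detected ({', '.join(hits)}).")
    else:
        print("No obvious PII patterns found; still review before submitting.")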

Misinformation

When producing text, popular generative AIs work by predicting the next likely word or punctuation mark. They do not understand meaning. Generative AIs are therefore prone to producing text that is untrue while sounding true. They may fabricate citations or disregard or discredit true statements.
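To make the "predicting the next likely word" idea concrete, here is a deliberately tiny sketch (not how production models are built) that predicts a next word from simple word-pair counts. It has no notion of truth, only of what tends to follow what:

    from collections import Counter, defaultdict

    # Toy "training" text; real models use vast corpora and neural networks.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count which word follows which (a bigram model).
    following = defaultdict(Counter)
    for word, next_word in zip(corpus, corpus[1:]):
        following[word][next_word] += 1

    def predict_next(word: str) -> str:
        """Return the statistically most likely next word, true or not."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else "<unknown>"

    print(predict_next("the"))  # "cat" -- chosen by frequency, not meaning
    print(predict_next("cat"))  # ties broken arbitrarily, also not by meaning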

Pay attention to the accuracy of the information you’re using. Generative AI content may warrant additional scrutiny. Since you may be exposed to generative AI content without knowing it, you may need to expend additional effort validating information generally.

Be especially alert to threats such as phishing, even if you are not a generative AI user yourself, because generative AI will make these attacks easier and more effective. The plausibility of generative AI output also makes it a valuable tool for bad actors.

Generative AIs incorporate and may amplify biases present in their training data. Results incorporate and reinforce these biases, which disproportionately harm historically marginalized peoples.

Consider how your knowledge of a language affects your use of these tools. Many generative AIs favor Standard American English; languages with limited representation in training sets receive lower-quality results. Pay particular attention to how bias may affect outputs and influence important decisions.

Confidentiality

Most generative AIs incorporate everything you send to them into their model. That includes the prompts you provide and any files or other data you supply. Appropriate protections must be in place before data are shared with outside entities. Much University data is required by law to be protected in particular ways. Most generally available generative AIs do not offer terms that are consistent with our obligations to protect that data.

By default, provide generative AIs only with prompts and other data that are appropriate for sharing publicly. Be wary of further entrenching the biases of generative AI models.

In addition to the data actively provided to generative AIs, vast amounts of data are scraped from whatever sources their creators can access. Approaches to respecting authorial rights vary among generative AI organizations.

Consider what type of data you are working with and how and where you are making it available. Once it is publicly available, there may be little or no opportunity to prevent its use by generative AI.

What Is AI?

The following is a small collection of AI terms, provided by Microsoft, to help explain the terminology commonly used when discussing Artificial Intelligence.

Source: 10 AI terms everyone should know

AI Terms

Artificial intelligence is basically a super-smart computer system that can imitate humans in some ways, like comprehending what people say, making decisions, translating between languages, analyzing if something is negative or positive, and even learning from experience. It’s artificial in that its intellect was created by humans using technology. Sometimes people say AI systems have digital brains, but they’re not physical machines or robots — they’re programs that run on computers. They work by putting a vast collection of data through algorithms, which are sets of instructions, to create models that can automate tasks that typically require human intelligence and time. Sometimes people specifically engage with an AI system — like asking Bing Chat for help with something — but more often the AI is happening in the background all around us, suggesting words as we type, recommending songs in playlists and providing more relevant information based on our preferences.

Generative AI leverages the power of large language models to make new things, not just regurgitate or provide information about existing things. It learns patterns and structures and then generates something that’s similar but new. It can make things like pictures, music, text, videos, and code. It can be used to create art, write stories, design products and even help doctors with administrative tasks. But it can also be used by bad actors to create fake news or pictures that look like photographs but aren’t real, so tech companies are working on ways to clearly identify AI-generated content.

Large language models, or LLMs, use machine learning techniques to help them process language so they can mimic the way humans communicate. They’re based on neural networks, or NNs, which are computing systems inspired by the human brain — sort of like a bunch of nodes and connections that simulate neurons and synapses. They are trained on a massive amount of text to learn patterns and relationships in language that help them use human words. Their problem-solving capabilities can be used to translate languages, answer questions in the form of a chatbot, summarize text and even write stories, poems and computer code. They don’t have thoughts or feelings, but sometimes they sound like they do, because they’ve learned patterns that help them respond the way a human might. They’re often fine-tuned by developers using a process called reinforcement learning from human feedback (RLHF) to help them sound more conversational.
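As a rough illustration of the "nodes and connections" idea (a deliberately tiny sketch, nothing like a real LLM), a single artificial neuron just multiplies its inputs by learned weights, sums them, and passes the result through an activation function; the example inputs and weights here are invented:

    import math

    def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
        """One simulated neuron: weighted sum of inputs plus bias, squashed by a sigmoid."""
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-total))  # sigmoid activation, output between 0 and 1

    # Networks stack millions to billions of these; the weights are what training adjusts.
    print(neuron([0.5, 0.8], [0.9, -0.4], bias=0.1))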

If artificial intelligence is the goal, machine learning is how we get there. It’s a field of computer science, under the umbrella of AI, where people teach a computer system how to do something by training it to identify patterns and make predictions based on them. Data is run through algorithms over and over, with different input and feedback each time to help the system learn and improve during the training process — like practicing piano scales 10 million times in order to sight-read music going forward. It’s especially helpful with problems that would otherwise be difficult or impossible to solve using traditional programming techniques, such as recognizing images and translating languages. It takes a huge amount of data, and that’s something we’ve only been able to harness in recent years as more information has been digitized and as computer hardware has become faster, smaller, more powerful and better able to process all that information.
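Here is a minimal sketch of that training loop, with toy data and an invented learning rate assumed for illustration: the same data are run through the model repeatedly, and feedback from each error nudges the model's parameter toward better predictions:

    # Learn y = 2x from examples by gradient descent -- a toy machine learning loop.
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

    weight = 0.0          # the model's single adjustable parameter
    learning_rate = 0.05  # how strongly each error adjusts the weight

    for epoch in range(200):                     # run the data through many times
        for x, y_true in data:
            y_pred = weight * x                  # the model's current prediction
            error = y_pred - y_true
            weight -= learning_rate * error * x  # feedback: adjust toward less error

    print(round(weight, 3))  # approaches 2.0 as training improves the model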

Generative AI systems can create stories, poems and songs, but sometimes we want results to be based in truth. Since these systems can’t tell the difference between what’s real and fake, they can give inaccurate responses that developers refer to as hallucinations, or the more accurate term, fabrications — much like if someone saw what looked like the outlines of a face on the moon and began saying there was an actual man in the moon. Developers try to resolve these issues through “grounding,” which is when they provide an AI system with additional information from a trusted source to improve accuracy about a specific topic. Sometimes a system’s predictions are wrong, too, if a model doesn’t have current information after it’s trained.
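A simplified sketch of the grounding idea follows; the lookup table and prompt format are assumptions for illustration. Before asking the model a question, trusted reference text is retrieved and prepended to the prompt so the answer can be anchored to it rather than invented:

    # A stand-in "trusted source"; real systems retrieve from vetted documents or databases.
    trusted_facts = {
        "uw founding": "The University of Washington was founded in 1861.",
    }

    def grounded_prompt(question: str, topic: str) -> str:
        """Prepend retrieved, trusted context so the model answers from it, not guesswork."""
        context = trusted_facts.get(topic, "No trusted context available.")
        return (
            f"Using only the context below, answer the question.\n"
            f"Context: {context}\n"
            f"Question: {question}"
        )

    print(grounded_prompt("When was UW founded?", "uw founding"))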

Responsible AI guides people as they try to design systems that are safe and fair — at every level, including the machine learning model, the software, the user interface and the rules and restrictions put in place to access an application. It’s a crucial element because these systems are often tasked with helping make important decisions about people, such as in education and healthcare, but since they’re created by humans and trained on data from an imperfect world, they can reflect any inherent biases. A big part of responsible AI involves understanding the data that was used to train the systems and finding ways to mitigate any shortcomings to help better reflect society at large, not just certain groups of people.

A prompt is an instruction entered into a system in language, images or code that tells the AI what task to perform. Engineers — and really all of us who interact with AI systems — must carefully design prompts to get the desired outcome from the large language models. It’s like placing your order at a deli counter: You don’t just ask for a sandwich, but you specify which bread you want and the type and amounts of condiments, vegetables, cheese and meat to get a lunch that you’ll find delicious and nutritious.
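In code terms, the deli-counter point is simply the difference between a vague prompt and a specific one; both example strings here are invented:

    # A vague prompt leaves the model guessing about audience, length, and format.
    vague = "Write about phishing."

    # A specific prompt states the task, audience, constraints, and desired output.
    specific = (
        "Write a 3-bullet summary, for university staff with no security background, "
        "of how to recognize a phishing email. Keep each bullet under 20 words."
    )

    for prompt in (vague, specific):
        print(prompt, end="\n\n")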


Additional Resources

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
https://dl.acm.org/doi/10.1145/3442188.3445922

Teaching@UW: ChatGPT and other AI-based tools

Artificial Intelligence Policy: A Primer and Roadmap:
https://digitalcommons.law.uw.edu/faculty-articles/640/

Can AI help boost accessibility? These researchers tested it for themselves:
https://www.washington.edu/news/2023/11/02/ai-accessibility-chatgpt-midjourney-ableist/

National AI Initiative:
https://www.ai.gov/

NIST Artificial Intelligence:
https://www.nist.gov/artificial-intelligence

Council of the Great City Schools:
K-12 Generative AI Readiness Checklist

Multi-agency white paper on AI use in deepfake threats (September 12, 2023):
https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF

Cybersecurity and Infrastructure Security Agency: Software Must Be Secure by Design, and Artificial Intelligence Is No Exception:
https://www.cisa.gov/news-events/news/software-must-be-secure-design-and-artificial-intelligence-no-exception

University of California, Berkeley:
https://cltc.berkeley.edu/program/ai-security-initiative/
https://www.ischool.berkeley.edu/news/2022/uc-berkeley-launches-ai-policy-hub

Harvard:
https://huit.harvard.edu/ai/guidelines

University of Toronto:
https://security.utoronto.ca/framework/guidelines/use-ai-intelligently/

MIT on AI:
https://internetpolicy.mit.edu/research/machineunderstanding/

Center for Digital Government – a national research and advisory institute focused on technology policy and best practices in state and local government:
https://www.govtech.com/cdg

Guidance on the Appropriate Use of Generative Artificial Intelligence in Graduate Theses:
https://www.sgs.utoronto.ca/about/guidance-on-the-use-of-generative-artificial-intelligence/

Questions?

Contact help@uw.edu with questions and concerns.