The materials in the databases we subscribe to should be ADA compliant, including support for assistive technologies such as screen readers and keyboard navigation. Some older documents may not be as accessible. The Mendik Library staff tries to ensure that our website is accessible for all patrons, but issues can occur. If you experience a problem with the website or would like to offer a suggestion, please complete this form.
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines programmed to think, learn, and problem-solve. It involves technologies such as machine learning and natural language processing, which enable computers to improve their performance over time. This guide will be updated to reflect new strategies, tools, resources, and more as AI and the issues surrounding it evolve.
If you have questions about whether and how you can use AI in class, ask your professor. Not all professors have the same policies.
Common Terms
Below are some key terms you will need to know when studying and working with AI.
Bias refers to the systematic and unfair skewing of outcomes produced by artificial intelligence systems, often reflecting or amplifying existing societal prejudices. This bias can arise from various sources, such as unrepresentative or imbalanced training data, or the inadvertent introduction of human assumptions during development. For example, if an AI system is trained on historical data that contains discriminatory patterns, it may perpetuate those biases in its predictions or decisions.
Chatbots are computer programs that simulate conversation with human users. A familiar example is the automated help chat on a website.
Generative Artificial Intelligence (Gen AI) uses large datasets to create new, original content, including text, images, audio, code, and more. Compared with other AI models, Gen AI focuses less on analysis, categorization, or decision-making and more on creating content and simulating human-like interaction.
Hallucinations occur when an AI model produces false or misleading information that seems plausible but is not grounded in reality. Hallucinations happen because AI models predict patterns rather than truly understanding the world, leading to outputs that can sound convincing but are wrong.
Large Language Model (LLM) is a complex model trained on vast amounts of data to generate text that resembles human writing. OpenAI's GPT series (which powers ChatGPT and Microsoft Copilot, among others), Anthropic's Claude, Google's Gemini, and Meta's LLaMA are examples of LLMs.
Prompt refers to the input text or instructions given to an AI model to generate a response.
Retrieval Augmented Generation (RAG) is an AI method that improves responses by first consulting outside sources, for instance performing a web search, before generating an answer. This makes the output more accurate, up to date, and tailored to specific topics.
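For readers curious about what "consulting outside sources first" looks like in practice, here is a toy Python sketch of the retrieve-then-generate flow. Everything in it is a hypothetical stand-in: the tiny document list replaces a real search engine or database, and `generate_answer` replaces an actual language model.

```python
import re

# Stand-in for an outside source (a real RAG system would query a
# search engine, database, or document collection).
DOCUMENTS = [
    "The Mendik Library offers research guides on many topics.",
    "RAG systems consult outside sources before generating an answer.",
]

def tokenize(text):
    """Split text into a set of lowercase words."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question, documents):
    """Step 1: find documents that share words with the question."""
    q_words = tokenize(question)
    scored = [(len(q_words & tokenize(d)), d) for d in documents]
    return [d for score, d in sorted(scored, reverse=True) if score > 0]

def generate_answer(question, context):
    """Step 2: stand-in for a language model call; a real system
    would include the retrieved text in the model's prompt."""
    return f"Based on {len(context)} retrieved source(s): {context[0]}"

answer = generate_answer("What is RAG?", retrieve("What is RAG?", DOCUMENTS))
print(answer)
```

The key design point is the order of operations: the system looks up relevant material first, then generates its answer from that material, which is why RAG output tends to be more current and better grounded than a model answering from memory alone.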
Training refers to the process of teaching an LLM to make accurate predictions or decisions by exposing it to labeled examples and adjusting the model based on its errors. Not all AI programs are trained on the same material or in the same way. Using bias as an example, in one program a prompt requesting a stock photo of a doctor may produce only images of men, while a different platform may produce more diverse results based on its training.
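To make the "adjusting based on its errors" idea concrete, here is a deliberately tiny illustration, not how LLMs are actually built: a one-parameter model learns from labeled examples to double numbers by repeatedly predicting, measuring its error, and adjusting.

```python
# Labeled examples: each input paired with the correct answer.
examples = [(1, 2), (2, 4), (3, 6)]

weight = 0.0          # the model's single adjustable parameter
learning_rate = 0.05  # how much to adjust on each error

for epoch in range(200):
    for x, target in examples:
        prediction = weight * x
        error = prediction - target
        weight -= learning_rate * error * x  # adjust based on the error

print(round(weight, 2))  # the weight settles near 2.0
```

Real LLM training adjusts billions of parameters over enormous text collections, but the loop is conceptually the same: predict, compare against the labeled example, and nudge the model to reduce the error.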