Artificial Intelligence Tools and Resources for Law Students

How Generative AI Can Make Mistakes

AI can be wrong in a number of ways:

Hallucinations: false or misleading information that seems plausible but is not grounded in reality. Hallucinated output is textual or visual content that includes fabricated details or incorrect facts. Hallucinations happen because AI models predict patterns rather than truly understanding your query, leading to outputs that sound convincing but are wrong. Examples include citations to cases that do not exist or images of people with seven fingers.

Bias: the systematic and unfair skewing of outcomes produced by AI systems, often reflecting or amplifying existing societal prejudices. Bias can arise from various sources, such as unrepresentative or imbalanced training data, flawed algorithmic design, or the inadvertent introduction of human assumptions during development. For example, an AI system trained on historical data that contains discriminatory patterns may perpetuate those biases in its predictions or decisions.

Context: A generative AI tool may respond with information, such as a case, that actually exists (not a hallucination) but is used outside the context you want. For example, a prompt about a sales contract might return an ADA landlord-tenant case instead of a sales contract case.

Cutoff Date: Generative AI tools are trained on data only up to a certain date. If you ask questions that require information from after that date, the tool will not be able to give you accurate responses. AI platforms capable of web browsing tend to have more current information because they can perform a real-time search; however, even this does not guarantee they are using the most up-to-date information.

Resources

When AI Gets It Wrong: Addressing Hallucinations and Bias

AI - The good, the bad, and the scary

What is ChatGPT and What Should Students Know about AI Chatbots?

AI Concerns (YouTube, 3 mins)

One specific reason to check any information you plan to use or cite from a generative AI program: attorneys continue to cite hallucinated cases that the programs presented as real. We encourage you not to be one of these attorneys.

Breaking Bad Briefs: A Snapshot of Lawyers, Litigants, and Experts’ Use (and Misuse) of GenAI in Court Filings by Professor Heidi Brown