Course: AI & Tools Literacy / Lesson

You ask an AI for help with a coding problem. The solution looks perfect. You try it, it doesn't work. The function it suggested doesn't even exist. The AI made it up.

This is a hallucination: the AI generates information that sounds plausible and authoritative but is actually false, fabricated, or nonsensical. It's not confusion; it's the model confidently producing something that never existed.

Examples of hallucinations

Type | Example | The Reality
Fake citations | "According to a 2023 study by Dr. Sarah Chen at MIT..." | Dr. Sarah Chen doesn't exist, or the study was never published
Invented facts | "The first email was sent in 1978 by Ray Tomlinson" | Tomlinson sent the first email in 1971
Non-existent code | "Use the Array.prototype.flatten() method" | No such method exists in JavaScript (the real method is flat())
Wrong URLs | "You can read more at example.com/article" | The page doesn't exist
Made-up people | "Software architect Jane Morrison recommends..." | Jane Morrison isn't a real person

Why do AI models hallucinate?

The prediction problem

LLMs predict the next token (the smallest unit of text an LLM processes, roughly three-quarters of a word) based on patterns in their training data. They don't have a database of facts to check against. When you ask for a specific fact:

  1. The model looks for patterns similar to your question
  2. It generates text that fits the pattern of a correct answer
  3. If the training data is unclear, it still generates something that sounds right

It's autocomplete that tries to be helpful even when it doesn't know the answer, more concerned with fluency than accuracy.
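Those three steps can be sketched with a toy frequency table (purely illustrative; `continuations` and `predictNext` are invented names here, and a real transformer is vastly more complex than counting):

```javascript
// Toy next-token picker: choose the most frequent continuation seen in
// "training" text. Notice there is no fact-checking step anywhere.
const continuations = {
  "the first email was sent in": { "1971": 3, "1978": 2 }, // noisy data
};

function predictNext(prompt) {
  const counts = continuations[prompt] ?? {};
  const ranked = Object.entries(counts).sort((a, b) => b[1] - a[1]);
  // Emits the top-ranked token even when the counts are nearly tied;
  // the output sounds equally confident either way.
  return ranked.length ? ranked[0][0] : null;
}

console.log(predictNext("the first email was sent in")); // "1971"
```

With slightly noisier data, "1978" could just as easily win, and nothing in the mechanism would signal the difference.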

Training data gaps

Models have knowledge cutoffs. When asked about events or technologies that emerged after training, the model may confidently mix up details, attribute things to wrong sources, or fabricate plausible-sounding information.

The confidence trap

AI models are trained to sound authoritative. Confident, well-structured answers score higher in human ratings, even if wrong. This creates a bias toward sounding certain.

02

Common types of hallucinations

Code hallucinations

Code hallucinations can be especially subtle:

// AI might suggest this:
const result = myArray.sortBy('name').reverse().groupBy('category');

// sortBy() and groupBy() don't exist on Array.prototype in vanilla
// JavaScript. reverse() is real, but the chain throws a TypeError
// as soon as sortBy() is called.

Why code hallucinations happen:

  • Training data includes many languages, libraries, and versions
  • The model mixes up similar APIs from different frameworks
  • Library versions change; training data may be outdated
  • Methods from one language get suggested in another
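One cheap defence: before trusting an unfamiliar method name, check that it actually exists. In Node or a browser console this takes one line per method:

```javascript
// Hallucinated method names show up as "undefined" on the prototype:
console.log(typeof Array.prototype.sortBy); // "undefined" (hallucinated)
console.log(typeof Array.prototype.flat);   // "function" (real, ES2019)
```

This catches fabricated names instantly; it won't catch a real method being used with the wrong arguments, so documentation is still the authority.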

Citation hallucinations

AI generates realistic-looking citations that are completely fabricated:

"According to a 2022 study published in the Journal of Cognitive Science
by Dr. Emily Rodriguez, AI adoption increased by 340% in healthcare..."

This has a date, journal name, researcher name, and statistic. The entire citation might be invented.
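One quick heuristic (the `hasCheckableId` helper is invented here for illustration): a citation with no DOI, URL, or arXiv id gives you nothing concrete to verify, which is itself a warning sign:

```javascript
// Flag citations that carry no checkable identifier. Absence doesn't
// prove fabrication, but it means verification has to start from zero.
function hasCheckableId(citation) {
  return /\b(doi:|10\.\d{4,}\/|arxiv:|https?:\/\/)/i.test(citation);
}

console.log(hasCheckableId("Rodriguez, E. (2022). J. Cognitive Science.")); // false
console.log(hasCheckableId("doi:10.1038/nature12373"));                     // true
```

Even when an identifier is present, look it up: AI models also generate real-looking DOIs that resolve to nothing or to an unrelated paper.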

03

Real-world examples of AI errors

The lawyer who cited fake cases: In 2023, a lawyer used ChatGPT to research legal precedents. The AI generated fabricated case names, docket numbers, and quotes. The lawyer submitted these to court without verification, leading to sanctions.

The chatbot that recommended eating rocks: Google's AI Overviews suggested users eat "at least one small rock per day" for minerals, from a satirical article the model failed to recognize as humor.

Medical misinformation: Healthcare chatbots have provided wrong medication dosages and incorrect treatment protocols. In one case, a chatbot told someone with a serious allergic reaction to "take an antihistamine and rest" instead of seeking emergency care.

Dangerous code: Stack Overflow temporarily banned AI-generated answers because many contained bugs, security vulnerabilities, or destructive operations, including suggesting rm -rf / (which deletes everything on a Unix system).
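A naive pre-flight check (the pattern list below is invented here and far from complete) can at least catch the most notorious destructive commands before anything runs:

```javascript
// Scan an AI-suggested shell command for well-known destructive
// patterns. A denylist like this is a seatbelt, not a safety proof.
const DANGEROUS = [
  /rm\s+-rf?\s+\/(\s|$)/, // delete the filesystem root
  /\bmkfs\b/,             // reformat a disk
  /\bdd\s+.*of=\/dev\//,  // overwrite a raw device
];

function looksDestructive(command) {
  return DANGEROUS.some((re) => re.test(command));
}

console.log(looksDestructive("rm -rf /")); // true
console.log(looksDestructive("ls -la"));   // false
```

The real lesson is the opposite direction: since no denylist is complete, never run AI-generated shell commands you don't fully understand.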

04

How to spot suspicious outputs

Red flags to watch for

Warning Sign | What to Do
Overly specific details without sources | Ask for citations, then verify them
Too-good-to-be-true solutions | Test thoroughly; don't copy-paste blindly
Vague or evasive responses to follow-up questions | Be skeptical of changing details
Confident tone on topics you know are uncertain | Remember confidence ≠ accuracy
Recent events or very new technologies | Verify against current sources
Unfamiliar function names or APIs | Check official documentation

The verification checklist

Before trusting AI output on important matters:

  1. Check the basics: Verify dates, names, locations with a quick search
  2. Trace the sources: If the AI cites a study, find the actual paper
  3. Test the code: Run it in a safe environment before production use
  4. Cross-reference: Compare with other sources or AI models
  5. Trust your expertise: If something feels off, investigate further

When to be extra cautious

High-stakes situations where verification is critical:

  • Medical or health advice
  • Legal information
  • Financial decisions
  • Safety-critical code
  • Academic citations
  • News about current events

Good to know
Recent models have improved at saying "I don't know" or qualifying uncertain information. But even the best models hallucinate, especially on niche topics or recent events.

05

Reducing hallucinations in your prompts

You can't eliminate hallucinations entirely, but you can reduce them:

Ask for uncertainty

"Explain quantum computing. If you're uncertain about anything, 
say so and explain why."

Request sources

"List 3-5 sources I can check to verify this information. 
Only include sources you're confident exist."

Break complex tasks into steps

Instead of asking for a complete solution, guide the AI through verification:

Step 1: Identify the approach
Step 2: Explain why this approach works
Step 3: Show me the code
Step 4: Explain any limitations or edge cases

Provide context (RAG: Retrieval-Augmented Generation)

When possible, provide the AI with relevant documents or context rather than asking it to rely on training data alone. This grounds the response in verified information.
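The idea can be sketched with naive word overlap (real systems use embedding search; `buildGroundedPrompt` and the sample docs are invented for illustration):

```javascript
// Minimal RAG sketch: pick the document sharing the most words with the
// question and prepend it, so the model answers from supplied text
// instead of unaided recall.
const docs = [
  "Array.prototype.flat() flattens nested arrays up to a given depth.",
  "Object.groupBy() groups iterable items by a key function.",
];

function buildGroundedPrompt(question) {
  const words = question.toLowerCase().match(/[a-z]{3,}/g) ?? [];
  const scored = docs
    .map((d) => ({
      d,
      score: words.filter((w) => d.toLowerCase().includes(w)).length,
    }))
    .sort((a, b) => b.score - a.score);
  return `Context:\n${scored[0].d}\n\nQuestion: ${question}\n` +
         `Answer using only the context above.`;
}

const prompt = buildGroundedPrompt("How do I flatten nested arrays?");
```

The closing instruction ("answer using only the context above") is the grounding step: it tells the model to prefer the retrieved text over its training-data guesses.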


Hallucinations are a fundamental limitation of current AI systems, a consequence of how LLMs work, not a bug to be patched away. Your job is to build verification habits that catch false information before it causes problems.