Hallucination
When an AI generates false, made-up, or nonsensical information with confidence.
Hallucination occurs when an AI model invents information that isn't true. A model might confidently cite a fake research paper, make up quotes attributed to real people, or describe events that never happened. The dangerous part is that hallucinations often sound plausible: the model doesn't say "I'm guessing"; it presents the false information as fact.
Why it happens: Models are trained to predict the most likely next token based on patterns in their training data. They don't "know" what's true or false; they predict what a plausible response looks like. If the training data contained false claims, or if a false statement simply fits the learned patterns, the model reproduces it fluently.
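To make that mechanism concrete, here is a toy sketch in Python (not a real language model, and the "training counts" are invented for illustration): the stand-in model only knows how often each continuation appeared, so it returns the most frequent one regardless of whether it is true.

# A toy sketch, not a real language model: the "model" below has only
# memorized how often each continuation appeared in made-up training counts,
# and always returns the most frequent one, whether or not it is true.
from collections import Counter

# Hypothetical training statistics: confident-sounding quotes are common,
# "no recorded statement" is rare, so the quote pattern dominates.
training_continuations = Counter({
    '"Machines will one day out-think us." (unverified)': 7,
    '"I have not commented on that topic."': 2,
    "[no recorded statement found]": 1,
})

def predict_continuation(prompt):
    # Choose the most probable continuation; plausibility, not truth,
    # is what these statistics encode.
    continuation, _count = training_continuations.most_common(1)[0]
    return f"{prompt} {continuation}"

print(predict_continuation("Albert Einstein said:"))
# Output: Albert Einstein said: "Machines will one day out-think us." (unverified)
# Fluent and confident, but the quote is invented.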
Mitigation: Grounding responses in factual data, retrieval-augmented generation (RAG), and prompting the model to cite its sources all reduce hallucination rates, but hallucinations remain a significant limitation of large language models.
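As a rough illustration of the RAG idea, the sketch below retrieves passages with naive keyword matching and builds a prompt that tells the model to answer only from those passages and cite them. The document store, the retrieval scoring, and the final model call are assumptions for illustration, not any particular library's API.

# A minimal RAG-style sketch. The documents and retrieval method are
# illustrative; real systems use vector embeddings and a proper index.
documents = {
    "einstein_bio.txt": "Albert Einstein died in 1955, before modern AI research began.",
    "ai_history.txt": "The term artificial intelligence was coined at the 1956 Dartmouth workshop.",
}

def retrieve(question, k=2):
    # Naive keyword-overlap scoring: count shared words between the
    # question and each document, return the top-k documents.
    q_words = set(question.lower().split())
    scored = []
    for name, text in documents.items():
        overlap = len(q_words & set(text.lower().split()))
        scored.append((overlap, name, text))
    scored.sort(reverse=True)
    return [(name, text) for _, name, text in scored[:k]]

def grounded_prompt(question):
    # Put the retrieved passages in the prompt and instruct the model to
    # answer only from them and cite sources, leaving less room to invent facts.
    passages = retrieve(question)
    context = "\n".join(f"[{name}] {text}" for name, text in passages)
    return (
        "Answer using only the passages below and cite the source in brackets. "
        "If the passages do not contain the answer, say so.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

print(grounded_prompt("What did Albert Einstein say about AI?"))
# The prompt is then sent to a language model (call not shown); constrained
# to these passages, a correct answer is that the sources record no such statement.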
Example
Asking GPT-3 "What did Albert Einstein say about AI?" can produce a plausible-sounding quote that Einstein never actually said.
Related terms
Grounding
Anchoring an AI's responses to factual data to reduce hallucination.
RAG (Retrieval-Augmented Generation)
A technique that combines document retrieval with AI generation to ground responses in factual data.
Safety Filter / Content Filter
A layer that blocks or modifies AI outputs that violate safety policies (hate speech, explicit content, etc.).