Hallucination

Level 1

Short Description

When a generative AI model produces output that is fluent and confident but factually incorrect or fabricated.

Friendly Description: A hallucination is when an AI makes something up that sounds right but isn't true. It's not lying on purpose; it just stitches together a confident-sounding answer from patterns in its training data. This is why it's smart to double-check important AI answers, the same way you'd double-check a friend who's a great storyteller but sometimes mixes up the details.

Example: If you ask an AI, "Who won the 1953 Nobel Prize in Physics?" and it confidently names someone who never actually won, that's a hallucination. The wording is smooth and the answer feels solid, but the fact itself is invented. Modern tools fight hallucination with techniques like grounding and retrieval-augmented generation, which hand the model trusted source text to base its answer on; a small sketch follows below.
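
To make grounding concrete, here is a minimal Python sketch of retrieval-grounded prompting. The tiny document store, the keyword-overlap retrieve() function, and build_grounded_prompt() are illustrative assumptions for this example, not any real library's API; production systems use vector similarity search and an actual model call.

```python
# A minimal sketch of retrieval-grounded prompting. DOCUMENTS, retrieve(),
# and build_grounded_prompt() are illustrative assumptions for this example,
# not any real library's API.

DOCUMENTS = [
    "The 1953 Nobel Prize in Physics was awarded to Frits Zernike "
    "for his invention of the phase-contrast microscope.",
    "The 1954 Nobel Prize in Physics was shared by Max Born and Walther Bothe.",
]

def retrieve(question: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the question
    (real systems use vector similarity search instead)."""
    q_words = set(question.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def build_grounded_prompt(question: str) -> str:
    """Prepend retrieved facts so the model answers from evidence
    rather than from patterns alone."""
    context = "\n".join(retrieve(question, DOCUMENTS))
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

if __name__ == "__main__":
    # The grounded prompt now carries the correct fact (Frits Zernike),
    # so a model can cite it instead of inventing a name.
    print(build_grounded_prompt("Who won the 1953 Nobel Prize in Physics?"))
```

The design point is simple: by pasting retrieved source text into the prompt and instructing the model to answer only from it, the model has something real to quote instead of inventing a plausible-sounding name.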