Definition: Connecting a model's output to verifiable, external sources of truth (such as documents or databases) to reduce hallucination.
Friendly Description: Grounding means giving an AI access to real, trustworthy information so its answers stay tied to facts rather than guesses. It's like asking a friend a question: if they're guessing from memory, they might be off, but if they look it up in a reliable book first, they're much more accurate. Grounding helps AI cite real sources and reduces those moments where it confidently makes things up.
Example: If you ask a grounded AI assistant, "What's our company's vacation policy?" it can pull the actual policy document from your HR system, read it, and answer based on what it found, rather than guessing. It might even quote the exact paragraph so you can verify.
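The HR example above can be sketched in a few lines of Python. This is a minimal illustration under assumed names (`retrieve`, `grounded_answer`, a toy in-memory document list): real systems use vector search and an LLM, but the grounding pattern is the same — retrieve a source first, answer only from it, cite it, and refuse rather than guess when nothing is found.

```python
# Minimal sketch of grounding: answer only from a retrieved document,
# citing the source so the reader can verify. The document store,
# word-overlap scoring, and answer format are illustrative assumptions.

def retrieve(query, documents):
    """Return the document sharing the most words with the query, if any."""
    query_words = set(query.lower().split())

    def overlap(doc):
        return len(query_words & set(doc["text"].lower().split()))

    best = max(documents, key=overlap)
    return best if overlap(best) > 0 else None

def grounded_answer(query, documents):
    """Answer from a retrieved source, or admit there is none (no guessing)."""
    doc = retrieve(query, documents)
    if doc is None:
        return "I couldn't find a source for that, so I won't guess."
    # Quote the source and name it, keeping the answer verifiable.
    return f'According to "{doc["title"]}": {doc["text"]}'

# Toy "HR system" with two policy documents.
docs = [
    {"title": "HR Vacation Policy",
     "text": "Employees accrue 1.5 vacation days per month."},
    {"title": "IT Security Policy",
     "text": "Passwords must be rotated every 90 days."},
]

print(grounded_answer("What is the vacation policy?", docs))
```

The key design choice is the refusal path: an ungrounded question (say, "weather forecast") returns an explicit "I couldn't find a source" rather than a fabricated answer, which is exactly the hallucination-reducing behavior grounding aims for.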