Grounding
Anchoring an AI's responses to factual data to reduce hallucination.
Grounding is the practice of connecting an AI's output to real, verifiable information. If an AI writes a news article, grounding means backing up each claim with a source. If an AI answers a question about your company, grounding means feeding it your actual company docs first.
Grounding reduces hallucination by constraining the model to information you provide. Retrieval-augmented generation (RAG) is the most common grounding technique: search your documents, pass relevant snippets to the model, and let it answer based on those snippets instead of its training data alone.
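The retrieve-then-answer loop above can be sketched in a few lines. This is a minimal illustration, not a production retriever: it scores documents by simple word overlap where a real system would use embeddings or a search index, and it stops at building the grounded prompt rather than calling a model. All names (`retrieve`, `build_grounded_prompt`, the sample documents) are made up for the example.

```python
def retrieve(query, documents, top_k=2):
    """Rank documents by word overlap with the query (stand-in for real search)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query, documents):
    """Assemble a prompt that constrains the model to the retrieved snippets."""
    snippets = retrieve(query, documents)
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer using only the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

docs = [
    "Acme Corp builds inventory software for retailers.",
    "Acme Corp was founded in 2012 in Denver.",
    "Unrelated note about the office coffee machine.",
]
prompt = build_grounded_prompt("What does Acme Corp build?", docs)
print(prompt)
```

The key design point is the instruction in the prompt: the model is told to answer only from the supplied context and to admit when the context doesn't cover the question, which is what constrains it away from inventing answers.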
Other grounding methods include fact-checking APIs, web search integration, and knowledge graph lookups. Grounding trades some flexibility for factual accuracy: a grounded model won't creatively invent answers, but it also knows only what you grounded it with.
Example
Without grounding: "Your company makes AI tools for healthcare." (plausible-sounding, but possibly false). With grounding: the model reads your About page first, then answers accurately based on its contents.