Hallucination
AI-generated information that sounds plausible but is factually incorrect.
Why it matters
An agent that hallucinates and then acts on the false information, for example by sending wrong emails or making incorrect record updates, creates real business risk.
In practice
We mitigate hallucinations with RAG (grounding responses in real data), FAQ matching (serving pre-verified answers), and confidence thresholds (escalating low-confidence cases instead of letting the model guess).
Related terms
Guardrails
Rules and constraints that prevent an agent from taking harmful or unauthorized actions.
RAG (Retrieval-Augmented Generation)
A method where AI retrieves relevant information from external sources before generating a response.
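The retrieve-then-generate flow can be shown with a toy example. The keyword-overlap retriever, the sample documents, and the prompt wording are all illustrative assumptions; real RAG systems typically use vector search over embeddings.

```python
# Illustrative knowledge base the agent is grounded in.
DOCUMENTS = [
    "Refunds are processed within 5 business days.",
    "Premium plans include priority support.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    # Retrieval happens BEFORE generation; the retrieved text is
    # placed in the prompt so the model answers from real data.
    context = "\n".join(retrieve(query, DOCUMENTS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Because the model is instructed to answer only from the retrieved context, its output is anchored to verifiable source material rather than to whatever it memorized in training.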
Evaluations (Evals)
Systematic testing of agent performance: accuracy, safety, reliability.
Chain of Thought
A prompting technique that encourages the AI to reason step-by-step before answering.
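The difference between a direct prompt and a chain-of-thought prompt is just an added instruction. The exact wording below is one common phrasing, shown here as an illustrative assumption.

```python
def direct_prompt(question: str) -> str:
    # Asks for an answer immediately, with no visible reasoning.
    return f"Q: {question}\nA:"

def cot_prompt(question: str) -> str:
    # The appended instruction nudges the model to produce
    # intermediate reasoning steps before its final answer.
    return f"Q: {question}\nA: Let's think step by step."
```

On multi-step problems (arithmetic, logic, planning), the step-by-step variant tends to surface reasoning errors that a direct answer would hide.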