# Hallucination in AI – Meaning and Examples

## Definition
Hallucination is the tendency of an AI model, especially a large language model, to generate content that is incorrect, fabricated, or unsupported by any source while presenting it with the same fluency and confidence as a correct answer.
## How It Works
Hallucinations arise when a model extrapolates beyond the patterns in its training data or answers without grounding in external knowledge: it predicts whatever text is statistically plausible given the prompt, not what has been factually verified.
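To make this concrete, here is a minimal sketch using a toy bigram model rather than a real LLM; the corpus and the `generate` helper are illustrative assumptions. Because the model only tracks local word statistics, it can splice training fragments into a fluent but false claim such as "capital of france is rome".

```python
import random
from collections import defaultdict

# Toy corpus standing in for training data (an illustrative assumption).
corpus = (
    "the eiffel tower is in paris . "
    "the capital of france is paris . "
    "the capital of italy is rome ."
).split()

# Build a bigram table: for each word, the words that followed it in training.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def generate(prompt_word, length=6, seed=0):
    """Continue a prompt by sampling next words purely from learned statistics."""
    random.seed(seed)
    out = [prompt_word]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Nothing constrains the sampler to the correct country-capital pairing, so
# depending on the seed it may emit "capital of france is rome" with the same
# fluency as a true sentence.
for seed in range(3):
    print(generate("capital", seed=seed))
```

The point of the sketch is that the generator optimizes for plausibility, not truth, which is exactly the gap that grounding mechanisms try to close.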
## Examples or Use Cases
It shows up in chatbots that assert wrong facts, summarizers that add details absent from the source text, and assistants that cite papers, court cases, or URLs that do not exist.
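As a hedged illustration of the "nonexistent references" failure mode, the sketch below flags any model-cited title that cannot be found in a trusted bibliography. The `KNOWN_TITLES` set, the `verify_citations` helper, and the deliberately fabricated second title are all hypothetical examples, not real tooling.

```python
# Hypothetical trusted bibliography; in practice this could be a citation
# database or an internal document index.
KNOWN_TITLES = {
    "attention is all you need",
    "deep residual learning for image recognition",
}

def verify_citations(generated_titles):
    """Split model-cited titles into verified and unverified groups."""
    found, missing = [], []
    for title in generated_titles:
        target = found if title.lower().strip() in KNOWN_TITLES else missing
        target.append(title)
    return found, missing

# The second title below is intentionally made up to stand in for a
# hallucinated citation; it should be flagged for human review.
model_output = [
    "Attention Is All You Need",
    "Quantum Semantics of Recursive Transformers",
]
found, missing = verify_citations(model_output)
print("verified:", found)
print("unverified (possible hallucinations):", missing)
```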
## Related Terms
- [RAG](#)
- [LLM](#)
- [Prompt Engineering](#)
## Summary
AI hallucination underscores the need for verification steps and retrieval-based grounding, such as RAG, to keep model outputs reliable.
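Since the summary points to retrieval as a mitigation, here is a minimal sketch of that idea under stated assumptions: the tiny in-memory knowledge base, the keyword-overlap retriever, and the `answer_with_grounding` helper are illustrative, not a production RAG pipeline. The key behavior is abstaining when nothing in the knowledge base supports an answer.

```python
# Illustrative in-memory knowledge base (an assumption for this sketch).
KNOWLEDGE_BASE = [
    "The Eiffel Tower is located in Paris, France.",
    "The Great Wall of China is over 13,000 miles long.",
]

def retrieve(question, top_k=1):
    """Rank snippets by how many words they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def answer_with_grounding(question, min_overlap=2):
    """Answer only when retrieval finds enough supporting text; otherwise abstain."""
    best = retrieve(question)[0]
    overlap = len(set(question.lower().split()) & set(best.lower().split()))
    if overlap >= min_overlap:
        return f"Based on: '{best}'"
    return "I don't have enough information to answer that."

print(answer_with_grounding("Where is the Eiffel Tower?"))   # grounded answer
print(answer_with_grounding("Who invented the stapler?"))    # abstains
```

Real systems replace the keyword overlap with vector search and pass the retrieved text to the model as context, but the design choice is the same: tie the answer to retrieved evidence instead of letting the model generate unsupported claims.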