A Geometric Taxonomy of Hallucinations in LLMs
This paper proposes a geometric taxonomy that partitions LLM hallucinations into three distinct types: unfaithfulness, confabulation, and factual error. It introduces two corresponding detection metrics, the Semantic Grounding Index and the Directional Grounding Index, which effectively identify unfaithful and confabulated outputs. For factual errors, however, the analysis reveals that apparent detection signals in existing benchmarks often stem from stylistic annotation confounds rather than genuine geometric distinctions.