[TOC]

  1. Title: Hallucination is Inevitable: An Innate Limitation of Large Language Models (2024)
  2. Author: Ziwei Xu et al.
  3. Publish Date: 22 Jan 2024
  4. Review Date: Sun, Jan 28, 2024
  5. url: https://arxiv.org/abs/2401.11817v1

Summary of paper

Contribution

The paper formalizes hallucination in large language models (LLMs) and argues that it can never be completely eliminated. Hallucination is defined as any inconsistency between a computable LLM and a computable ground truth function. Drawing on results from learning theory, the paper shows that LLMs cannot learn all computable functions and will therefore always hallucinate. Since this formal world is a simplified version of the real world, hallucination is argued to be inevitable for real-world LLMs as well. Additionally, for real-world LLMs constrained by provable time complexity, the paper identifies hallucination-prone tasks and validates the claims empirically. Finally, the paper evaluates existing hallucination mitigators within the formal-world framework and discusses the practical implications for safe deployment of LLMs.
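
To make the formal claim concrete, here is a minimal sketch of the setup as I read it from the abstract; the symbols (S, f, h, s_i) are my own notation, and the paper's actual construction may differ in its details.

```latex
% Minimal sketch of the formal setup; notation is mine, not the paper's verbatim.
% S: the set of all finite input strings; f: a computable ground truth function;
% h: a computable LLM. Hallucination is simply disagreement with the ground truth:
\[
  h \text{ hallucinates w.r.t. } f
  \quad\Longleftrightarrow\quad
  \exists\, s \in S : \; h(s) \neq f(s).
\]
% Diagonalization idea behind the inevitability result: given a computable
% enumeration h_1, h_2, \dots of (total) LLMs and an enumeration s_1, s_2, \dots
% of S, define a ground truth that disagrees with the i-th LLM on the i-th string:
\[
  f(s_i) \;:=\; \text{some output} \neq h_i(s_i) \qquad \text{for every } i \in \mathbb{N}.
\]
% Such an f is still computable, yet every h_i hallucinates on s_i, so no LLM in
% the enumeration can be hallucination-free with respect to this ground truth.
```

The force of the result is that the argument covers every computably enumerable family of LLMs at once, rather than any particular architecture or training recipe.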

Some key terms

hallucination

A formal definition of hallucination is difficult to agree on; the paper sidesteps the debate by defining it purely as an inconsistency between the LLM's output and a computable ground truth function.

Results

In defence of LLMs and hallucination

LLMs are continuously evolving, and advances in model architecture and error-correction strategies are expected to reduce the severity of hallucinations over time. While complete elimination is, by the paper's own argument, impossible, researchers can still aim to better understand and control hallucination for specific applications.

Moreover, hallucination is not entirely negative. In creative fields like art and literature, the unintended outputs from LLMs can inspire human creators, offering unique perspectives and fostering innovation. Thus, the hallucinatory aspect of LLMs can be viewed positively as a source of creativity and inspiration.

Practical implications

Guardrails and Fences are Essential: Without proper guardrails and fences, LLMs cannot be relied upon for critical decision-making. These mechanisms are designed to ensure that LLMs operate within expected boundaries and do not deviate into unethical, disturbing, or destructive content. Given the inevitability of hallucination, guardrails and fences are deemed essential safeguards.
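
As a concrete, purely illustrative picture of what such a guardrail can look like in practice, here is a minimal Python sketch of an output filter wrapped around an LLM call. This is not the paper's proposal; the names `guarded_generate`, `generate`, and the toy blocklist are hypothetical.

```python
from typing import Callable, Iterable

Validator = Callable[[str], bool]

def guarded_generate(
    generate: Callable[[str], str],
    prompt: str,
    validators: Iterable[Validator],
    fallback: str = "I can't answer that reliably.",
) -> str:
    """Run an LLM call behind simple guardrails.

    `generate` is any callable mapping a prompt to a model response
    (a hypothetical stand-in for a real LLM client). Each validator
    returns True if the response is acceptable; if any check fails,
    the caller gets a safe fallback instead of the raw model output.
    """
    response = generate(prompt)
    if all(check(response) for check in validators):
        return response
    return fallback

# Example guardrails: a toy blocklist and a length bound.
BLOCKLIST = {"rm -rf", "social security number"}

def no_blocked_terms(text: str) -> bool:
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

def within_length(text: str, limit: int = 2000) -> bool:
    return len(text) <= limit

if __name__ == "__main__":
    # A fake "LLM" so the sketch runs without any external service.
    fake_llm = lambda prompt: f"Echo: {prompt}"
    print(guarded_generate(fake_llm, "Summarize the paper.",
                           [no_blocked_terms, within_length]))
```

The structural point matters more than the specific checks: because hallucination cannot be ruled out at the model level, the validation sits outside the model, so a failed check degrades to a refusal rather than a confident but wrong answer.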

Summary

Possible hallucination mitigators