[Synthese] Why Mental Metaphors Do Not Help Us Understand Chatbot Mistakes

Year and volume: 2026 (Vol. 207).

Author(s): Markus Pantsar; Regina E. Fabry.

Abstract: Chatbots such as OpenAI’s ChatGPT function by detecting probabilistic patterns in their training data, which makes them prone to generating factual mistakes in their outputs. Recently, it has become commonplace in philosophical, scientific, and popular discourse to capture such mistakes with metaphors drawn from discourses about the human mind, the two most popular at present being hallucinating and bullshitting. In this paper, we review, discuss, and criticise these mental metaphors. Applying conceptual metaphor theory, we provide numerous reasons why they do not succeed in giving us a better understanding of factual chatbot mistakes. We conclude by calling for justifications of the epistemic feasibility and fruitfulness of the metaphors at issue, and we raise the question of what would be lost if we stopped trying to capture factual chatbot mistakes with mental metaphors.

DOI: https://doi.org/10.1007/s11229-026-05551-8
