Abstract
Recent advances in LLMs have sparked a debate on whether they understand text. In this position paper, we argue that opponents in this debate hold different definitions of understanding, and in particular differ in their views on the role of consciousness. To substantiate this claim, we propose a thought experiment involving an open-source chatbot, Z, which excels on every possible benchmark, seemingly without subjective experience. We ask whether Z is capable of understanding, and show that different schools of thought within seminal AI research seem to answer this question differently, uncovering their terminological disagreement. Moving forward, we propose two distinct working definitions of understanding which explicitly acknowledge the question of consciousness, and draw connections with a rich literature in philosophy, psychology, and neuroscience.

- Anthology ID: 2024.findings-acl.425
- Volume: Findings of the Association for Computational Linguistics ACL 2024
- Month: August
- Year: 2024
- Address: Bangkok, Thailand and virtual meeting
- Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 7137–7143
- URL: https://aclanthology.org/2024.findings-acl.425
- Cite (ACL): Ariel Goldstein and Gabriel Stanovsky. 2024. Do Zombies Understand? A Choose-Your-Own-Adventure Exploration of Machine Cognition. In Findings of the Association for Computational Linguistics ACL 2024, pages 7137–7143, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
- Cite (Informal): Do Zombies Understand? A Choose-Your-Own-Adventure Exploration of Machine Cognition (Goldstein & Stanovsky, Findings 2024)
- PDF: https://preview.aclanthology.org/nschneid-patch-4/2024.findings-acl.425.pdf