Abstract
Some claim language models understand us. Others won’t hear it. To clarify, I investigate three views of human language understanding: as-mapping, as-reliability and as-representation. I argue that while behavioral reliability is necessary for understanding, internal representations are sufficient; they climb the right hill. I review state-of-the-art language and multi-modal models: they are pragmatically challenged by under-specification of form. I question the Scaling Paradigm: limits on resources may prohibit scaled-up models from approaching understanding. Last, I describe how as-representation advances a science of understanding. We need work which probes model internals, adds more of human language, and measures what models can learn.
- Anthology ID: 2022.findings-emnlp.16
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2022
- Month: December
- Year: 2022
- Address: Abu Dhabi, United Arab Emirates
- Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 214–222
- URL: https://aclanthology.org/2022.findings-emnlp.16
- DOI: 10.18653/v1/2022.findings-emnlp.16
- Cite (ACL): Jared Moore. 2022. Language Models Understand Us, Poorly. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 214–222, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Cite (Informal): Language Models Understand Us, Poorly (Moore, Findings 2022)
- PDF: https://preview.aclanthology.org/nschneid-patch-4/2022.findings-emnlp.16.pdf