Do Language Models Have Semantics? On the Five Standard Positions

Anders Søgaard


Abstract
We identify five positions on whether large language models (LLMs) and chatbots can be said to exhibit semantic understanding. These positions differ in whether they attribute semantics to LLMs and/or to chatbots trained on feedback, in what kind of semantics they attribute (inferential or referential), and in virtue of what they attribute referential semantics (internal or external causes). This allows for 2^4 = 16 logically possible positions, but we have only seen people argue for five of these. Based on a pairwise comparison of these five positions, we conclude that the better theory of semantics in large language models is, in fact, a sixth combination: both large language models and chatbots have inferential and referential semantics, grounded in both internal and external causes.
Anthology ID: 2025.acl-long.1258
Volume: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 25910–25922
URL: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1258/
Cite (ACL): Anders Søgaard. 2025. Do Language Models Have Semantics? On the Five Standard Positions. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 25910–25922, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Do Language Models Have Semantics? On the Five Standard Positions (Søgaard, ACL 2025)
PDF: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1258.pdf