Entailment Semantics Can Be Extracted from an Ideal Language Model

William Merrill, Alex Warstadt, Tal Linzen


Abstract
Language models are often trained on text alone, without additional grounding. There is debate as to how much of natural language semantics can be inferred from such a procedure. We prove that entailment judgments between sentences can be extracted from an ideal language model that has perfectly learned its target distribution, assuming the training sentences are generated by Gricean agents, i.e., agents who follow fundamental principles of communication from the linguistic theory of pragmatics. We also show entailment judgments can be decoded from the predictions of a language model trained on such Gricean data. Our results reveal a pathway for understanding the semantic information encoded in unlabeled linguistic data and a potential framework for extracting semantics from language models.
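To make the abstract's claim concrete, here is a minimal Python sketch of the general idea of decoding entailment from a language model's sentence probabilities. Everything in it is an illustrative assumption rather than the paper's construction: the `logprob(context, sentence)` interface, the score log p(y | x) − log p(y), and the fixed threshold are hypothetical stand-ins, whereas the paper proves an exact test for an ideal LM trained on Gricean data.

```python
import math
from typing import Callable

# Hypothetical interface: (context, sentence) -> log p(sentence | context),
# with context == "" meaning no preceding sentence. This is an assumption
# for illustration, not an API from the paper.
LogProb = Callable[[str, str], float]

def entailment_score(logprob: LogProb, premise: str, hypothesis: str) -> float:
    """Score how much asserting the premise raises the hypothesis's probability.

    Computes log p(y | x) - log p(y): a positive value means the premise
    makes the hypothesis more likely under the model.
    """
    return logprob(premise, hypothesis) - logprob("", hypothesis)

def entails(logprob: LogProb, premise: str, hypothesis: str,
            threshold: float = 0.0) -> bool:
    # Heuristic cutoff: the paper derives an exact decoding test for an
    # ideal LM over Gricean data; a fixed threshold is only a rough proxy.
    return entailment_score(logprob, premise, hypothesis) > threshold

if __name__ == "__main__":
    # Toy hand-specified "LM" over a world where "it is pouring" entails
    # "it is raining" (the probabilities are made up for illustration).
    table = {
        ("", "it is raining"): 0.50,
        ("", "it is pouring"): 0.25,
        ("it is pouring", "it is raining"): 0.90,
        ("it is raining", "it is pouring"): 0.20,
    }
    lp = lambda ctx, s: math.log(table[(ctx, s)])
    print(entails(lp, "it is pouring", "it is raining"))  # True
    print(entails(lp, "it is raining", "it is pouring"))  # False
```

In this toy distribution the entailing direction raises the hypothesis's probability (0.90 vs. 0.50) while the non-entailing direction lowers it (0.20 vs. 0.25), so the score separates the two cases; the paper's contribution is showing that, under Gricean assumptions, such a separation is guaranteed rather than heuristic.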
Anthology ID: 2022.conll-1.13
Original: 2022.conll-1.13v1
Version 2: 2022.conll-1.13v2
Volume: Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates (Hybrid)
Editors: Antske Fokkens, Vivek Srikumar
Venue: CoNLL
SIG: SIGNLL
Publisher: Association for Computational Linguistics
Pages: 176–193
URL: https://aclanthology.org/2022.conll-1.13
DOI: 10.18653/v1/2022.conll-1.13
Cite (ACL): William Merrill, Alex Warstadt, and Tal Linzen. 2022. Entailment Semantics Can Be Extracted from an Ideal Language Model. In Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL), pages 176–193, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Cite (Informal): Entailment Semantics Can Be Extracted from an Ideal Language Model (Merrill et al., CoNLL 2022)
PDF: https://preview.aclanthology.org/dois-2013-emnlp/2022.conll-1.13.pdf
Video: https://preview.aclanthology.org/dois-2013-emnlp/2022.conll-1.13.mp4