Abstract
The uniform information density (UID) hypothesis states that humans tend to distribute information roughly evenly across an utterance or discourse. Early evidence in support of the UID hypothesis came from Genzel and Charniak (2002), which proposed an entropy rate constancy principle based on the probability of English text under n-gram language models. We re-evaluate the claims of Genzel and Charniak (2002) with neural language models, failing to find clear evidence in support of entropy rate constancy. We conduct a range of experiments across datasets, model sizes, and languages and discuss implications for the uniform information density hypothesis and linguistic theories of efficient communication more broadly.
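To make the measurement concrete, the sketch below estimates per-sentence surprisal under a neural language model, conditioning each sentence on its full preceding document context; under Genzel and Charniak's formulation, entropy rate constancy predicts that out-of-context sentence entropy rises with sentence position while in-context entropy stays roughly flat. This is a minimal illustration, not the paper's exact pipeline: the model choice (GPT-2 via Hugging Face transformers), the toy document, and the helper `mean_surprisal` are illustrative assumptions.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def mean_surprisal(context: str, sentence: str) -> float:
    """Average surprisal (nats per token) of `sentence` given the preceding document `context`."""
    ctx_ids = tokenizer(context)["input_ids"] if context else []
    sent_text = (" " + sentence) if context else sentence  # GPT-2 BPE folds the space into the next token
    sent_ids = tokenizer(sent_text)["input_ids"]
    input_ids = torch.tensor([ctx_ids + sent_ids])
    with torch.no_grad():
        log_probs = torch.log_softmax(model(input_ids).logits, dim=-1)  # (1, seq_len, vocab)
    total = 0.0
    for i, tok in enumerate(sent_ids):
        pos = len(ctx_ids) + i - 1  # logits at position pos predict the token at pos + 1
        if pos < 0:
            continue  # the model assigns no probability to the document's very first token
        total += -log_probs[0, pos, tok].item()
    return total / max(len(sent_ids), 1)

# Toy three-sentence "document"; real experiments would average over many documents.
document = [
    "The committee met on Tuesday.",
    "It discussed the annual budget.",
    "No final decision was reached.",
]
context = ""
for position, sentence in enumerate(document, start=1):
    print(f"sentence {position}: {mean_surprisal(context, sentence):.2f} nats/token")
    context = (context + " " + sentence).strip()
```

Running the same loop with the context argument left empty gives the out-of-context curve that Genzel and Charniak compared against the in-context one.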
- Anthology ID: 2023.findings-emnlp.1039
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
- Month: December
- Year: 2023
- Address: Singapore
- Editors: Houda Bouamor, Juan Pino, Kalika Bali
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 15537–15549
- URL: https://aclanthology.org/2023.findings-emnlp.1039
- DOI: 10.18653/v1/2023.findings-emnlp.1039
- Cite (ACL): Vivek Verma, Nicholas Tomlin, and Dan Klein. 2023. Revisiting Entropy Rate Constancy in Text. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 15537–15549, Singapore. Association for Computational Linguistics.
- Cite (Informal): Revisiting Entropy Rate Constancy in Text (Verma et al., Findings 2023)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2023.findings-emnlp.1039.pdf