Abstract
The article presents entropy rate estimates for six human languages, obtained from large, state-of-the-art corpora of up to 7.8 gigabytes. To obtain estimates for data length tending to infinity, we use an extrapolation function given by an ansatz. Whereas ansatzes of this kind have been proposed in previous research, here we introduce a stretched exponential extrapolation function that has a smaller error of fit. In this way, we uncover the possibility that the entropy rates of human languages are positive but 20% smaller than previously reported.
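As a rough illustration of the extrapolation step described in the abstract, the sketch below fits a stretched exponential to compression rates measured at increasing data lengths. The functional form r(n) = h · exp(A · n^(β−1)) is one plausible reading of "stretched exponential" (for 0 < β < 1 it decays to the entropy rate h as n → ∞); consult the paper for the exact ansatz. The data points here are synthetic, not values from the corpora.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(n, h, A, beta):
    """Assumed stretched-exponential ansatz: r(n) = h * exp(A * n**(beta - 1)).
    For 0 < beta < 1, n**(beta - 1) -> 0 as n -> infinity, so r(n) -> h,
    the extrapolated entropy rate in bits per character."""
    return h * np.exp(A * n ** (beta - 1.0))

# Synthetic compression rates (bits/char) at growing prefix lengths n
# (characters); illustrative only, not measurements from the paper.
n = np.array([1e4, 1e5, 1e6, 1e7, 1e8])
r = np.array([3.2, 2.6, 2.1, 1.8, 1.6])

# Fit h, A, beta; the extrapolated entropy rate is the fitted h.
(h, A, beta), _ = curve_fit(
    stretched_exp, n, r,
    p0=[1.0, 2.0, 0.8],                       # rough initial guess
    bounds=([0, 0, 0], [np.inf, np.inf, 1]),  # h > 0, 0 < beta < 1
)
print(f"extrapolated entropy rate h = {h:.3f} bits/char (beta = {beta:.2f})")
```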
- Anthology ID: W16-4124
- Volume: Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity (CL4LC)
- Month: December
- Year: 2016
- Address: Osaka, Japan
- Editors: Dominique Brunato, Felice Dell’Orletta, Giulia Venturi, Thomas François, Philippe Blache
- Venue: CL4LC
- Publisher: The COLING 2016 Organizing Committee
- Pages: 213–221
- URL: https://aclanthology.org/W16-4124
- Cite (ACL): Ryosuke Takahira, Kumiko Tanaka-Ishii, and Łukasz Dębowski. 2016. Upper Bound of Entropy Rate Revisited —A New Extrapolation of Compressed Large-Scale Corpora—. In Proceedings of the Workshop on Computational Linguistics for Linguistic Complexity (CL4LC), pages 213–221, Osaka, Japan. The COLING 2016 Organizing Committee.
- Cite (Informal): Upper Bound of Entropy Rate Revisited —A New Extrapolation of Compressed Large-Scale Corpora— (Takahira et al., CL4LC 2016)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/W16-4124.pdf