Lower Perplexity is Not Always Human-Like
Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui
Abstract
In computational psycholinguistics, various language models have been evaluated against human reading behavior (e.g., eye movements) to build human-like computational models. However, most previous efforts have focused almost exclusively on English, despite the community's recent trend toward linguistic universality. To fill this gap, this paper investigates whether established results in computational psycholinguistics generalize across languages. Specifically, we re-examine an established generalization, namely that the lower a language model's perplexity is, the more human-like the language model is, in Japanese, a language typologically different from English. Our experiments demonstrate that this established generalization exhibits a surprising lack of universality; namely, lower perplexity is not always human-like. Moreover, this discrepancy between English and Japanese is further explored from the perspective of (non-)uniform information density. Overall, our results suggest that cross-lingual evaluation will be necessary to construct human-like computational models.
- Anthology ID:
- 2021.acl-long.405
- Original:
- 2021.acl-long.405v1
- Version 2:
- 2021.acl-long.405v2
- Volume:
- Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
- Month:
- August
- Year:
- 2021
- Address:
- Online
- Editors:
- Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
- Venues:
- ACL | IJCNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 5203–5217
- URL:
- https://preview.aclanthology.org/build-pipeline-with-new-library/2021.acl-long.405/
- DOI:
- 10.18653/v1/2021.acl-long.405
- Cite (ACL):
- Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, and Kentaro Inui. 2021. Lower Perplexity is Not Always Human-Like. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5203–5217, Online. Association for Computational Linguistics.
- Cite (Informal):
- Lower Perplexity is Not Always Human-Like (Kuribayashi et al., ACL-IJCNLP 2021)
- PDF:
- https://preview.aclanthology.org/build-pipeline-with-new-library/2021.acl-long.405.pdf
- Code:
- kuribayashi4/surprisal_reading_time_en_ja
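The evaluation paradigm the abstract describes can be illustrated with a minimal sketch. The code below is not the paper's implementation; it is a toy example, with hypothetical probabilities and reading times, of the two quantities typically involved: corpus perplexity of a language model, and the correlation between per-token surprisal and human reading times (the usual proxy for "human-likeness").

```python
import math

def surprisal(prob):
    """Surprisal in bits: -log2 p(word | context)."""
    return -math.log2(prob)

def perplexity(probs):
    """Perplexity = 2 ** (mean per-token surprisal in bits)."""
    return 2 ** (sum(surprisal(p) for p in probs) / len(probs))

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-token probabilities assigned by a language model,
# and hypothetical measured reading times (ms) for the same tokens.
probs = [0.5, 0.1, 0.25, 0.05]
reading_times = [210, 320, 250, 380]

ppl = perplexity(probs)                                  # lower = better LM fit to the text
fit = pearson([surprisal(p) for p in probs], reading_times)  # higher = more "human-like"
```

The paper's point is precisely that these two numbers need not improve together: a model with lower `ppl` does not necessarily yield a higher surprisal-to-reading-time correlation, at least not in Japanese.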