What Makes Language Models Good-enough?

Daiki Asami, Saku Sugawara

Abstract
Psycholinguistic research suggests that humans may build a representation of linguistic input that is ‘good-enough’ for the task at hand. This study examines what architectural features enable language models to learn human-like good-enough language processing. We focus on the number of layers and self-attention heads in Transformers. We create a good-enough language processing (GELP) evaluation dataset (7,680 examples), designed to test the effects of two plausibility types, eight construction types, and three degrees of memory cost on language processing. To annotate GELP, we first conduct a crowdsourcing experiment whose design follows prior psycholinguistic studies. Our model evaluation against the annotated GELP then reveals that the full model, as well as models with fewer layers and/or self-attention heads, exhibits good-enough performance. This result suggests that models with shallower depth and fewer heads can learn good-enough language processing.
Anthology ID:
2024.findings-acl.913
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
15453–15467
URL:
https://aclanthology.org/2024.findings-acl.913
DOI:
10.18653/v1/2024.findings-acl.913
Cite (ACL):
Daiki Asami and Saku Sugawara. 2024. What Makes Language Models Good-enough? In Findings of the Association for Computational Linguistics ACL 2024, pages 15453–15467, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
What Makes Language Models Good-enough? (Asami & Sugawara, Findings 2024)
PDF:
https://preview.aclanthology.org/nschneid-patch-5/2024.findings-acl.913.pdf