How Do Large Language Models Evaluate Lexical Complexity?

Abdelhak Kelious, Mathieu Constant, Christophe Coeur


Abstract
In this work, we explore the prediction of lexical complexity by combining supervised approaches with the use of large language models (LLMs). We first evaluate the impact of different prompting strategies (zero-shot, one-shot, and chain-of-thought) on the quality of the predictions, comparing the results with human annotations from the CompLex 2.0 corpus. Our results indicate that LLMs, and in particular gpt-4o, benefit from explicit instructions to better approximate human judgments, although some discrepancies remain. Moreover, a calibration approach that aligns LLM predictions with human judgments using a small amount of manually annotated data appears to be a promising solution for improving the reliability of annotations in a supervised scenario.
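As an illustrative sketch only (not the authors' implementation), the two ideas in the abstract can be pictured as a zero-shot prompt that asks an LLM for a complexity score and a simple linear calibration fitted on a small human-annotated sample. The prompt wording, the [0, 1] scale, the helper names, and the use of a linear fit are assumptions; only the gpt-4o model name and the overall calibration idea come from the abstract.

```python
# Hypothetical sketch: prompt text, scale, and calibration method are
# assumptions, not the paper's exact setup.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def llm_complexity(word: str, sentence: str, model: str = "gpt-4o") -> float:
    """Ask the LLM for a zero-shot lexical-complexity score in [0, 1]."""
    prompt = (
        f"Rate the lexical complexity of the word '{word}' in the sentence "
        "below on a continuous scale from 0 (very simple) to 1 (very complex). "
        f"Answer with a single number only.\n\nSentence: {sentence}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return float(response.choices[0].message.content.strip())


def fit_linear_calibration(llm_scores, human_scores):
    """Fit y = a*x + b mapping LLM scores to human judgments,
    using a small manually annotated sample."""
    a, b = np.polyfit(llm_scores, human_scores, deg=1)
    return lambda x: float(np.clip(a * x + b, 0.0, 1.0))


# Example: calibrate on a handful of annotated items, then apply.
# (The scores below are placeholders, not data from the paper.)
llm_scores = [0.30, 0.55, 0.80]
gold_scores = [0.20, 0.45, 0.70]
calibrate = fit_linear_calibration(llm_scores, gold_scores)
# calibrated = calibrate(llm_complexity("ubiquitous", "Smartphones are ubiquitous."))
```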
Anthology ID:
2025.starsem-1.28
Volume:
Proceedings of the 14th Joint Conference on Lexical and Computational Semantics (*SEM 2025)
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Lea Frermann, Mark Stevenson
Venue:
*SEM
Publisher:
Association for Computational Linguistics
Pages:
348–361
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.starsem-1.28/
Cite (ACL):
Abdelhak Kelious, Mathieu Constant, and Christophe Coeur. 2025. How Do Large Language Models Evaluate Lexical Complexity?. In Proceedings of the 14th Joint Conference on Lexical and Computational Semantics (*SEM 2025), pages 348–361, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
How Do Large Language Models Evaluate Lexical Complexity? (Kelious et al., *SEM 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.starsem-1.28.pdf