Assessing LLMs’ Understanding of Structural Contrasts in the Lexicon
Shuxu Li | Antoine Venant | Philippe Langlais | François Lareau
Proceedings of the 16th International Conference on Computational Semantics, 2025
We present a new benchmark to evaluate the lexical competence of large language models (LLMs), built on a hierarchical classification of lexical functions (LFs) within the Meaning-Text Theory (MTT) framework. Drawing on the French Lexical Network (LN-fr) dataset, the benchmark employs contrastive tasks to probe the models’ sensitivity to fine-grained paradigmatic and syntagmatic distinctions. Our results show that performance varies significantly across LFs and systematically declines as the distinctions become more fine-grained, highlighting current LLMs’ limitations in relational and structured lexical understanding.