Evaluating Pretrained Causal Language Models for Synonymy

Ioana Ivan, Carlos Ramisch, Alexis Nasr


Abstract
The scaling of causal language models in size and training data has enabled them to tackle increasingly complex tasks. Despite the development of sophisticated tests to reveal their new capabilities, the underlying basis of these complex skills remains unclear. We argue that complex skills might be explained in terms of simpler ones, represented by linguistic concepts. As an initial step in exploring this hypothesis, we focus on the lexical-semantic concept of synonymy, laying the groundwork for research into its relationship with more complex skills. We develop a comprehensive test suite to assess various aspects of synonymy under different conditions, and evaluate open-source causal models of up to 10 billion parameters. We find that these models effectively recognize synonymy but struggle to generate synonyms when prompted with relevant context.
Anthology ID:
2025.findings-acl.649
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12533–12551
URL:
https://preview.aclanthology.org/corrections-2025-08/2025.findings-acl.649/
DOI:
10.18653/v1/2025.findings-acl.649
Cite (ACL):
Ioana Ivan, Carlos Ramisch, and Alexis Nasr. 2025. Evaluating Pretrained Causal Language Models for Synonymy. In Findings of the Association for Computational Linguistics: ACL 2025, pages 12533–12551, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Evaluating Pretrained Causal Language Models for Synonymy (Ivan et al., Findings 2025)
PDF:
https://preview.aclanthology.org/corrections-2025-08/2025.findings-acl.649.pdf