Metaphor and Large Language Models: When Surface Features Matter More than Deep Understanding

Elisa Sanchez-Bayona, Rodrigo Agerri


Abstract
This paper presents a comprehensive evaluation of the capabilities of Large Language Models (LLMs) in metaphor interpretation across multiple datasets, tasks, and prompt configurations. Although metaphor processing has gained significant attention in Natural Language Processing (NLP), previous research has been limited to single-dataset evaluations and specific task settings, often using artificially constructed data through lexical replacement. We address these limitations by conducting extensive experiments using diverse publicly available datasets with inference and metaphor annotations, focusing on Natural Language Inference (NLI) and Question Answering (QA) tasks. The results indicate that LLMs’ performance is more influenced by features such as lexical overlap and sentence length than by metaphorical content, demonstrating that any alleged emergent ability of LLMs to understand metaphorical language results from a combination of surface-level features, in-context learning, and linguistic knowledge. This work provides critical insights into the current capabilities and limitations of LLMs in processing figurative language, highlighting the need for more realistic evaluation frameworks in metaphor interpretation tasks. Data and code are publicly available at: https://github.com/elisanchez-beep/metaphorLLM
Anthology ID:
2025.findings-acl.898
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
17462–17477
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.898/
Cite (ACL):
Elisa Sanchez-Bayona and Rodrigo Agerri. 2025. Metaphor and Large Language Models: When Surface Features Matter More than Deep Understanding. In Findings of the Association for Computational Linguistics: ACL 2025, pages 17462–17477, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Metaphor and Large Language Models: When Surface Features Matter More than Deep Understanding (Sanchez-Bayona & Agerri, Findings 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.898.pdf