Using Large Language Models to Perform MIPVU-Inspired Automatic Metaphor Detection

Sebastian Reimann, Tatjana Scheffler


Abstract
Automatic metaphor detection has often been inspired by linguistic procedures for manual metaphor identification. In this work, we test how closely the steps required by the Metaphor Identification Procedure VU Amsterdam (MIPVU) can be translated into prompts for generative Large Language Models (LLMs) and how well three commonly used LLMs perform these steps. We find that while the procedure itself can be modeled with only a few compromises, none of the language models matches the performance of supervised, fine-tuned methods for metaphor detection. All models failed to sufficiently filter out literal examples, in which no contrast between the contextual meaning and a more basic or concrete meaning was present. Both versions of LLaMA, however, signaled interesting potential for detecting similarities between literal and metaphorical meanings that may be exploited in future work.
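To illustrate the kind of approach the abstract describes, the sketch below shows how the MIPVU steps (determine the contextual meaning, check for a more basic meaning, test for contrast and comparability) could be phrased as a single prompt for a chat-style LLM. This is a minimal illustration, not the authors' actual prompts: the prompt wording and the `query_llm` callable are hypothetical stand-ins for whatever model interface (e.g. a local LLaMA endpoint) is used.

```python
# Minimal sketch of a MIPVU-inspired metaphor-detection prompt.
# `query_llm` is a hypothetical stand-in for any chat-style LLM call;
# the prompt wording is illustrative, not taken from the paper.

MIPVU_PROMPT = """You are annotating metaphor following the MIPVU procedure.

Sentence: "{sentence}"
Target word: "{word}"

Step 1: State the contextual meaning of the target word in this sentence.
Step 2: State a more basic meaning of the word (more concrete,
        body-related, or historically older), if one exists.
Step 3: Decide whether the contextual meaning contrasts with the basic
        meaning but can be understood in comparison with it.

Answer with "metaphorical" or "literal" on the final line."""


def classify_word(sentence: str, word: str, query_llm) -> str:
    """Classify one lexical unit as metaphorical or literal."""
    prompt = MIPVU_PROMPT.format(sentence=sentence, word=word)
    answer = query_llm(prompt)
    # Read the label off the model's final output line.
    label = answer.strip().splitlines()[-1].lower()
    return "metaphorical" if "metaphor" in label else "literal"
```

In a MIPVU-style pipeline this classification would be applied per lexical unit, so a full sentence is annotated by calling `classify_word` once for each content word.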
Anthology ID:
2025.analogyangle-1.2
Volume:
Proceedings of the 2nd Workshop on Analogical Abstraction in Cognition, Perception, and Language (Analogy-Angle II)
Month:
August
Year:
2025
Address:
Vienna, Austria
Editors:
Giulia Rambelli, Filip Ilievski, Marianna Bolognesi, Pia Sommerauer
Venues:
Analogy-Angle | WS
Publisher:
Association for Computational Linguistics
Pages:
10–21
URL:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.analogyangle-1.2/
Cite (ACL):
Sebastian Reimann and Tatjana Scheffler. 2025. Using Large Language Models to Perform MIPVU-Inspired Automatic Metaphor Detection. In Proceedings of the 2nd Workshop on Analogical Abstraction in Cognition, Perception, and Language (Analogy-Angle II), pages 10–21, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Using Large Language Models to Perform MIPVU-Inspired Automatic Metaphor Detection (Reimann & Scheffler, Analogy-Angle 2025)
PDF:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.analogyangle-1.2.pdf