Reasoning Beyond Literal: Cross-style Multimodal Reasoning for Figurative Language Understanding

Seyyed Saeid Cheshmi, Hahnemann Ortiz, James Mooney, Dongyeop Kang


Abstract
Vision–language models (VLMs) have demonstrated strong reasoning abilities in literal multimodal tasks such as visual mathematics and science question answering. However, figurative language—such as sarcasm, humor, and metaphor—remains a significant challenge, as it conveys intent and emotion through subtle incongruities between expressed and intended meanings. In multimodal settings, accompanying images can amplify or invert textual meaning, demanding models that reason across modalities and account for subjectivity. We propose a three-step framework for developing efficient multimodal reasoning models that can (i) interpret multimodal figurative language, (ii) provide transparent reasoning traces, and (iii) generalize across multiple figurative styles. Experiments across four styles show that (1) incorporating reasoning traces substantially improves multimodal figurative understanding, (2) reasoning learned in one style can transfer to others—especially between related styles like sarcasm and humor—and (3) training jointly across styles yields a generalized reasoning VLM that outperforms much larger open- and closed-source models. Our findings show that lightweight VLMs with verifiable reasoning achieve robust cross-style generalization while providing inspectable reasoning traces for multimodal tasks. The code and implementation are available at https://github.com/scheshmi/CrossStyle-MMR.
Anthology ID:
2026.findings-eacl.311
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Màrquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5942–5956
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.311/
Cite (ACL):
Seyyed Saeid Cheshmi, Hahnemann Ortiz, James Mooney, and Dongyeop Kang. 2026. Reasoning Beyond Literal: Cross-style Multimodal Reasoning for Figurative Language Understanding. In Findings of the Association for Computational Linguistics: EACL 2026, pages 5942–5956, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Reasoning Beyond Literal: Cross-style Multimodal Reasoning for Figurative Language Understanding (Cheshmi et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.311.pdf
Checklist:
 2026.findings-eacl.311.checklist.pdf