Are Multimodal Large Language Models Pragmatically Competent Listeners in Simple Reference Resolution Tasks?
Simeon Junker, Manar Ali, Larissa Koch, Sina Zarrieß, Hendrik Buschmeier
Abstract
We investigate the linguistic abilities of multimodal large language models (MLLMs) in reference resolution tasks featuring simple yet abstract visual stimuli, such as color patches and color grids. Although the task is straightforward for human dyads and may not seem challenging for today’s language models, we consider it a highly relevant probe of the pragmatic capabilities of MLLMs. Our results and analyses indeed suggest that basic pragmatic capabilities, such as the context-dependent interpretation of color descriptions, still constitute major challenges for state-of-the-art MLLMs.
- Anthology ID: 2025.findings-acl.1236
- Volume: Findings of the Association for Computational Linguistics: ACL 2025
- Month: July
- Year: 2025
- Address: Vienna, Austria
- Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
- Venues: Findings | WS
- Publisher: Association for Computational Linguistics
- Pages: 24101–24109
- URL: https://preview.aclanthology.org/acl25-workshop-ingestion/2025.findings-acl.1236/
- Cite (ACL): Simeon Junker, Manar Ali, Larissa Koch, Sina Zarrieß, and Hendrik Buschmeier. 2025. Are Multimodal Large Language Models Pragmatically Competent Listeners in Simple Reference Resolution Tasks? In Findings of the Association for Computational Linguistics: ACL 2025, pages 24101–24109, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal): Are Multimodal Large Language Models Pragmatically Competent Listeners in Simple Reference Resolution Tasks? (Junker et al., Findings 2025)
- PDF: https://preview.aclanthology.org/acl25-workshop-ingestion/2025.findings-acl.1236.pdf