VELA: An LLM-Hybrid-as-a-Judge Approach for Evaluating Long Image Captions

Kazuki Matsuda, Yuiga Wada, Shinnosuke Hirano, Seitaro Otsuki, Komei Sugiura


Abstract
In this study, we focus on the automatic evaluation of long, detailed image captions generated by multimodal Large Language Models (MLLMs). Most existing automatic evaluation metrics for image captioning are designed for short captions and are ill-suited to evaluating long ones. Moreover, recent LLM-as-a-Judge approaches suffer from slow inference due to their reliance on autoregressive decoding and early fusion of visual information. To address these limitations, we propose VELA, an automatic evaluation metric for long captions developed within a novel LLM-Hybrid-as-a-Judge framework. Furthermore, we propose LongCap-Arena, a benchmark specifically designed for evaluating metrics for long captions. It comprises 7,805 images, corresponding human-provided long reference captions and long candidate captions, and 32,246 human judgments from three distinct perspectives: Descriptiveness, Relevance, and Fluency. We demonstrated that VELA outperformed existing metrics and achieved superhuman performance on LongCap-Arena.
Anthology ID:
2025.emnlp-main.438
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
8691–8707
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.438/
Cite (ACL):
Kazuki Matsuda, Yuiga Wada, Shinnosuke Hirano, Seitaro Otsuki, and Komei Sugiura. 2025. VELA: An LLM-Hybrid-as-a-Judge Approach for Evaluating Long Image Captions. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 8691–8707, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
VELA: An LLM-Hybrid-as-a-Judge Approach for Evaluating Long Image Captions (Matsuda et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.438.pdf
Checklist:
2025.emnlp-main.438.checklist.pdf