Komei Sugiura
2025
VELA: An LLM-Hybrid-as-a-Judge Approach for Evaluating Long Image Captions
Kazuki Matsuda | Yuiga Wada | Shinnosuke Hirano | Seitaro Otsuki | Komei Sugiura
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
In this study, we focus on the automatic evaluation of long and detailed image captions generated by multimodal Large Language Models (MLLMs). Most existing automatic evaluation metrics for image captioning are designed primarily for short captions and are not suitable for evaluating long ones. Moreover, recent LLM-as-a-Judge approaches suffer from slow inference because they rely on autoregressive decoding and early fusion of visual information. To address these limitations, we propose VELA, an automatic evaluation metric for long captions developed within a novel LLM-Hybrid-as-a-Judge framework. Furthermore, we propose LongCap-Arena, a benchmark specifically designed for evaluating metrics for long captions. This benchmark comprises 7,805 images, the corresponding human-provided long reference and candidate captions, and 32,246 human judgments from three distinct perspectives: Descriptiveness, Relevance, and Fluency. We demonstrated that VELA outperformed existing metrics and achieved superhuman performance on LongCap-Arena.
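The abstract does not detail VELA's architecture, but the meta-evaluation it describes (checking a metric against LongCap-Arena's per-perspective human judgments) is typically a rank-correlation computation. A minimal Python sketch of that protocol follows, with a toy overlap scorer standing in for VELA and hypothetical record fields standing in for the actual dataset schema:

from scipy.stats import kendalltau

# Hypothetical record layout for LongCap-Arena-style entries; the actual
# dataset schema is not given in the abstract.
examples = [
    {"candidate": "a red bicycle leans against a weathered brick wall",
     "reference": "a bicycle parked next to a brick wall",
     "human": {"Descriptiveness": 4, "Relevance": 5, "Fluency": 4}},
    {"candidate": "a dog sprints across a wide sandy beach at sunset",
     "reference": "a dog running on the beach",
     "human": {"Descriptiveness": 5, "Relevance": 4, "Fluency": 5}},
    {"candidate": "some objects are visible in the picture",
     "reference": "a bowl of fruit on a wooden table",
     "human": {"Descriptiveness": 1, "Relevance": 2, "Fluency": 3}},
]

def metric_score(candidate: str, reference: str) -> float:
    """Toy stand-in for a learned metric such as VELA: token-overlap Jaccard."""
    cand, ref = set(candidate.split()), set(reference.split())
    return len(cand & ref) / len(cand | ref)

# Report rank correlation with human judgments for each perspective.
for perspective in ("Descriptiveness", "Relevance", "Fluency"):
    scores = [metric_score(e["candidate"], e["reference"]) for e in examples]
    human = [e["human"][perspective] for e in examples]
    tau, _ = kendalltau(scores, human)
    print(f"{perspective}: Kendall tau = {tau:.3f}")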
2023
JaSPICE: Automatic Evaluation Metric Using Predicate-Argument Structures for Image Captioning Models
Yuiga Wada | Kanta Kaneda | Komei Sugiura
Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)
Image captioning studies rely heavily on automatic evaluation metrics such as BLEU and METEOR. However, such n-gram-based metrics have been shown to correlate poorly with human evaluation, leading to the proposal of alternatives such as SPICE for English; no equivalent metrics have been established for other languages. Therefore, in this study, we propose an automatic evaluation metric called JaSPICE, which evaluates Japanese captions based on scene graphs. The proposed method generates a scene graph from dependencies and the predicate-argument structure, and extends the graph using synonyms. We conducted experiments employing 10 image captioning models trained on STAIR Captions and PFN-PIC, and constructed the Shichimi dataset, which contains 103,170 human evaluations. The results showed that our metric outperformed the baseline metrics in terms of correlation with human evaluation.
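As with SPICE, the score presumably reduces to an F-measure over matched scene-graph tuples; the Python sketch below illustrates that matching step under this assumption, with a hypothetical two-entry synonym table in place of the parser and thesaurus the paper actually uses:

# Illustrative SPICE-style scoring over scene-graph tuples. The real JaSPICE
# builds its tuples from Japanese dependency/predicate-argument parses and a
# proper synonym resource; both are replaced by toy stand-ins here.
SYNONYMS = {  # hypothetical synonym table
    "dog": {"dog", "puppy"},
    "run": {"run", "dash"},
}

def expand(word):
    """Map a word to its synonym set (the word itself if no entry exists)."""
    return SYNONYMS.get(word, {word}) | {word}

def tuples_match(t1, t2):
    """Two tuples match if every slot pair shares at least one synonym."""
    return len(t1) == len(t2) and all(expand(a) & expand(b) for a, b in zip(t1, t2))

def spice_f1(candidate, references):
    """F-measure between candidate and reference tuple sets."""
    matched_c = sum(any(tuples_match(c, r) for r in references) for c in candidate)
    matched_r = sum(any(tuples_match(r, c) for c in candidate) for r in references)
    precision = matched_c / len(candidate) if candidate else 0.0
    recall = matched_r / len(references) if references else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Tuples take the forms (object,), (object, attribute), (subject, relation, object).
cand = [("puppy",), ("puppy", "dash"), ("beach",)]
refs = [("dog",), ("dog", "run"), ("sand",)]
print(f"F1 = {spice_f1(cand, refs):.3f}")  # "puppy"/"dash" match via synonyms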
2010
Modeling Spoken Decision Making Dialogue and Optimization of its Dialogue Strategy
Teruhisa Misu | Komei Sugiura | Kiyonori Ohtake | Chiori Hori | Hideki Kashioka | Hisashi Kawai | Satoshi Nakamura
Proceedings of the SIGDIAL 2010 Conference