JaSPICE: Automatic Evaluation Metric Using Predicate-Argument Structures for Image Captioning Models

Yuiga Wada, Kanta Kaneda, Komei Sugiura


Abstract
Image captioning studies heavily rely on automatic evaluation metrics such as BLEU and METEOR. However, such n-gram-based metrics have been shown to correlate poorly with human evaluation, which has led to alternative metrics such as SPICE; SPICE, however, targets English, and no equivalent metric has been established for other languages. In this study, we therefore propose JaSPICE, an automatic evaluation metric that evaluates Japanese captions based on scene graphs. The proposed method generates a scene graph from dependencies and predicate-argument structures, and extends the graph using synonyms. We conducted experiments with 10 image captioning models trained on STAIR Captions and PFN-PIC, and constructed the Shichimi dataset, which contains 103,170 human evaluations. The results showed that our metric outperformed the baseline metrics in terms of the correlation coefficient with human evaluations.
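The SPICE family of metrics scores a candidate caption by matching scene-graph tuples against those of reference captions, and the abstract notes that JaSPICE additionally matches through synonyms. The sketch below illustrates that general idea only: an F1 over tuple sets with a toy synonym table. The tuple extraction from Japanese dependencies and predicate-argument structures is not shown, and all names here are illustrative assumptions, not the paper's actual implementation.

```python
from itertools import product

# Toy, symmetric synonym table (illustrative; the paper uses a Japanese
# synonym resource, not this dictionary).
SYNONYMS = {"dog": {"dog", "hound"}, "hound": {"dog", "hound"}}

def expand(t):
    """Return the set of tuples equivalent to tuple t under SYNONYMS."""
    options = [sorted(SYNONYMS.get(w, {w})) for w in t]
    return {tuple(p) for p in product(*options)}

def spice_style_f1(candidate, reference):
    """F1 between candidate and reference scene-graph tuple lists,
    where two tuples match if their synonym expansions overlap."""
    cand_sets = [expand(t) for t in candidate]
    ref_sets = [expand(t) for t in reference]
    p_hits = sum(1 for c in cand_sets if any(c & r for r in ref_sets))
    r_hits = sum(1 for r in ref_sets if any(r & c for c in cand_sets))
    p = p_hits / len(candidate) if candidate else 0.0
    r = r_hits / len(reference) if reference else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0
```

For example, `spice_style_f1([("hound", "run")], [("dog", "run")])` returns 1.0, since the synonym expansion lets "hound" match "dog"; a plain exact-match F1 would return 0.0.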
Anthology ID:
2023.conll-1.28
Volume:
Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)
Month:
December
Year:
2023
Address:
Singapore
Editors:
Jing Jiang, David Reitter, Shumin Deng
Venue:
CoNLL
Publisher:
Association for Computational Linguistics
Pages:
424–435
URL:
https://aclanthology.org/2023.conll-1.28
DOI:
10.18653/v1/2023.conll-1.28
Cite (ACL):
Yuiga Wada, Kanta Kaneda, and Komei Sugiura. 2023. JaSPICE: Automatic Evaluation Metric Using Predicate-Argument Structures for Image Captioning Models. In Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL), pages 424–435, Singapore. Association for Computational Linguistics.
Cite (Informal):
JaSPICE: Automatic Evaluation Metric Using Predicate-Argument Structures for Image Captioning Models (Wada et al., CoNLL 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-5/2023.conll-1.28.pdf