ViLBERTScore: Evaluating Image Caption Using Vision-and-Language BERT
Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Kyomin Jung
Abstract
In this paper, we propose an evaluation metric for image captioning systems that uses both image and text information. Unlike previous methods that rely only on textual representations to evaluate a caption, our approach uses visiolinguistic representations. The proposed method generates image-conditioned embeddings for each token of both the generated and reference texts using ViLBERT. Then, these contextual embeddings from the two sentences are compared to compute the similarity score. Experimental results on three benchmark datasets show that our method correlates significantly better with human judgments than all existing metrics.
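The comparison step described in the abstract mirrors BERTScore-style greedy token matching, applied to ViLBERT's image-conditioned embeddings. The sketch below illustrates that step under stated assumptions: `cand_emb` and `ref_emb` are hypothetical (num_tokens, dim) matrices of embeddings already extracted with ViLBERT for the generated and reference captions conditioned on the same image, and `vilbertscore_f1` is an illustrative name, not the authors' released API (see the linked code repository for the actual implementation).

```python
import torch
import torch.nn.functional as F

def vilbertscore_f1(cand_emb: torch.Tensor, ref_emb: torch.Tensor) -> float:
    """Compare image-conditioned token embeddings of two captions.

    cand_emb, ref_emb: (num_tokens, dim) tensors, assumed to come from
    ViLBERT for the generated and reference captions respectively,
    both conditioned on the same image.
    """
    # Normalize so dot products equal cosine similarities.
    cand = F.normalize(cand_emb, dim=-1)
    ref = F.normalize(ref_emb, dim=-1)

    # Pairwise cosine similarity between every candidate/reference token pair.
    sim = cand @ ref.T  # shape: (len_cand, len_ref)

    # Greedy matching as in BERTScore: each token is scored against its
    # most similar counterpart in the other caption.
    precision = sim.max(dim=1).values.mean()
    recall = sim.max(dim=0).values.mean()
    return (2 * precision * recall / (precision + recall)).item()

# Toy usage with random embeddings standing in for ViLBERT outputs.
score = vilbertscore_f1(torch.randn(12, 1024), torch.randn(10, 1024))
```

The key difference from text-only metrics such as BERTScore is that the token embeddings being matched are conditioned on the image, so the similarity reflects visual grounding as well as textual overlap.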
- Anthology ID: 2020.eval4nlp-1.4
- Volume: Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems
- Month: November
- Year: 2020
- Address: Online
- Editors: Steffen Eger, Yang Gao, Maxime Peyrard, Wei Zhao, Eduard Hovy
- Venue: Eval4NLP
- Publisher: Association for Computational Linguistics
- Pages: 34–39
- URL: https://aclanthology.org/2020.eval4nlp-1.4
- DOI: 10.18653/v1/2020.eval4nlp-1.4
- Cite (ACL): Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, and Kyomin Jung. 2020. ViLBERTScore: Evaluating Image Caption Using Vision-and-Language BERT. In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, pages 34–39, Online. Association for Computational Linguistics.
- Cite (Informal): ViLBERTScore: Evaluating Image Caption Using Vision-and-Language BERT (Lee et al., Eval4NLP 2020)
- PDF: https://preview.aclanthology.org/ml4al-ingestion/2020.eval4nlp-1.4.pdf
- Code: hwanheelee1993/vilbertscore