CIDEr-R: Robust Consensus-based Image Description Evaluation
Gabriel Oliveira dos Santos, Esther Luna Colombini, Sandra Avila
Abstract
This paper shows that CIDEr-D, a traditional evaluation metric for image description, does not work properly on datasets whose sentences are significantly longer than those in the MS COCO Captions dataset. We also show that CIDEr-D's performance is hampered by the lack of multiple reference sentences and by high variance in sentence length. To address these problems, we introduce CIDEr-R, which improves CIDEr-D by making it more flexible in dealing with datasets with high sentence-length variance. We demonstrate that CIDEr-R is more accurate and closer to human judgment than CIDEr-D, and more robust with respect to the number of available references. Our results reveal that using Self-Critical Sequence Training to optimize CIDEr-R generates descriptive captions. In contrast, when CIDEr-D is optimized, the generated captions' length tends to be similar to the reference length; however, the models also repeat the same word several times to increase the sentence length.
- Anthology ID:
- 2021.wnut-1.39
- Volume:
- Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)
- Month:
- November
- Year:
- 2021
- Address:
- Online
- Venue:
- WNUT
- SIG:
- Publisher:
- Association for Computational Linguistics
- Note:
- Pages:
- 351–360
- Language:
- URL:
- https://aclanthology.org/2021.wnut-1.39
- DOI:
- 10.18653/v1/2021.wnut-1.39
- Cite (ACL):
- Gabriel Oliveira dos Santos, Esther Luna Colombini, and Sandra Avila. 2021. CIDEr-R: Robust Consensus-based Image Description Evaluation. In Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021), pages 351–360, Online. Association for Computational Linguistics.
- Cite (Informal):
- CIDEr-R: Robust Consensus-based Image Description Evaluation (Oliveira dos Santos et al., WNUT 2021)
- PDF:
- https://preview.aclanthology.org/nodalida-main-page/2021.wnut-1.39.pdf
- Code
- ruotianluo/coco-caption
- Data
- COCO, COCO Captions
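The abstract's central claim, that CIDEr-D penalizes length mismatches on an absolute scale and therefore struggles with high sentence-length variance, can be sketched numerically. The Gaussian penalty with sigma = 6 below is CIDEr-D's published formulation; the relative-length variant is only an illustrative alternative in the spirit of CIDEr-R, not the paper's exact formula:

```python
import math


def cider_d_length_penalty(cand_len: int, ref_len: int, sigma: float = 6.0) -> float:
    """Gaussian length penalty used by CIDEr-D (sigma = 6 in the original paper).

    The penalty depends on the absolute word-count difference, so a
    10-word gap is punished equally for short and long captions.
    """
    return math.exp(-((cand_len - ref_len) ** 2) / (2 * sigma ** 2))


def relative_length_penalty(cand_len: int, ref_len: int) -> float:
    """Illustrative relative-length penalty (an assumption for this sketch,
    not CIDEr-R's actual formula): the penalty depends on the length ratio,
    so the same absolute gap matters less for longer references.
    """
    return min(cand_len, ref_len) / max(cand_len, ref_len)


if __name__ == "__main__":
    # Same absolute gap of 10 words, very different contexts:
    print(cider_d_length_penalty(40, 50))   # same penalty as below
    print(cider_d_length_penalty(10, 20))   # same penalty as above
    print(relative_length_penalty(40, 50))  # mild penalty (ratio 0.8)
    print(relative_length_penalty(10, 20))  # harsher penalty (ratio 0.5)
```

The absolute Gaussian treats both cases identically, while the ratio-based penalty distinguishes them, which mirrors the paper's argument for a length term that scales with the reference length.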