Denoising Large-Scale Image Captioning from Alt-text Data Using Content Selection Models

Khyathi Raghavi Chandu, Piyush Sharma, Soravit Changpinyo, Ashish V. Thapliyal, Radu Soricut


Abstract
Training large-scale image captioning (IC) models demands access to a rich and diverse set of training examples that are expensive to curate in terms of both time and labor. Alt-text–based captions gathered from the web are a far cheaper alternative to scale with, at the cost of being noisy. Recent modeling approaches to IC often fall short in performance when leveraging these noisy datasets rather than clean annotations. We address this problem with a simple yet effective technique of breaking down the task into two smaller, more controllable tasks – skeleton prediction and skeleton-based caption generation. Specifically, we show that sub-selecting content words as skeletons helps in generating improved and denoised captions when leveraging rich yet noisy alt-text–based uncurated datasets. We also show that the predicted English skeletons can be further leveraged cross-lingually to generate non-English captions, and present experimental results covering caption generation in French, Italian, German, Spanish, and Hindi. We also show that skeleton-based prediction allows for better control of certain caption properties, such as length, content, and gender expression, providing a handle to perform human-in-the-loop interpretable semi-automatic corrections.
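The two-stage idea from the abstract can be illustrated with a minimal sketch. This is not the authors' model: the stop-word heuristic for content-word selection and the placeholder generation step below are assumptions made purely for illustration; in the paper both stages are learned, image-conditioned models.

# Illustrative sketch only (not the paper's implementation).
# Stage 1: select content words from a noisy alt-text caption as a "skeleton".
# Stage 2: generate a caption conditioned on the image and the skeleton
#          (here a trivial placeholder; the real system is a learned model).

FUNCTION_WORDS = {
    "a", "an", "the", "of", "in", "on", "at", "for", "with", "and", "or",
    "to", "is", "are", "was", "were", "this", "that", "by", "from",
}

def extract_skeleton(caption: str) -> list[str]:
    """Stage 1 (heuristic stand-in): keep only content words."""
    tokens = (t.strip(".,!?") for t in caption.lower().split())
    return [t for t in tokens if t and t not in FUNCTION_WORDS]

def generate_caption(image_features, skeleton: list[str]) -> str:
    """Stage 2 (placeholder): a real system conditions a captioning model
    on both the image and the predicted skeleton."""
    return " ".join(skeleton)

noisy_alt_text = "Buy the best photo of a dog playing in the park today!"
skeleton = extract_skeleton(noisy_alt_text)
print(skeleton)                          # ['buy', 'best', 'photo', 'dog', 'playing', 'park', 'today']
print(generate_caption(None, skeleton))  # denoised, content-only caption stub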
Anthology ID:
2022.coling-1.532
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, Seung-Hoon Na
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
6089–6104
URL:
https://aclanthology.org/2022.coling-1.532
Cite (ACL):
Khyathi Raghavi Chandu, Piyush Sharma, Soravit Changpinyo, Ashish V. Thapliyal, and Radu Soricut. 2022. Denoising Large-Scale Image Captioning from Alt-text Data Using Content Selection Models. In Proceedings of the 29th International Conference on Computational Linguistics, pages 6089–6104, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
Denoising Large-Scale Image Captioning from Alt-text Data Using Content Selection Models (Chandu et al., COLING 2022)
PDF:
https://preview.aclanthology.org/ingest-acl-2023-videos/2022.coling-1.532.pdf
Data
Conceptual Captions, MS COCO