Adversarial Evaluation of Multimodal Machine Translation

Desmond Elliott

Abstract
The promise of combining language and vision in multimodal machine translation is that systems will produce better translations by leveraging the image data. However, the evidence surrounding whether the images are useful is unconvincing due to inconsistencies between text-similarity metrics and human judgements. We present an adversarial evaluation to directly examine the utility of the image data in this task. Our evaluation tests whether systems perform better when paired with congruent images or incongruent images. This evaluation shows that only one out of three publicly available systems is sensitive to this perturbation of the data. We recommend that multimodal translation systems should be able to pass this sanity check in the future.
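To make the evaluation protocol concrete, here is a minimal Python sketch of the congruent/incongruent comparison, assuming a generic interface rather than the paper's actual implementation: `translate_fn` stands in for any multimodal MT system, `sentence_score` for a sentence-level metric such as Meteor, and both names are hypothetical placeholders. The derangement step realises the incongruent condition by pairing every source sentence with someone else's image.

```python
import random
from typing import Callable, List, Sequence


def derange(items: Sequence, rng: random.Random) -> List:
    """Shuffle `items` so that no element keeps its original index,
    giving every source sentence a mismatched (incongruent) image."""
    if len(items) < 2:
        raise ValueError("need at least two items to derange")
    idx = list(range(len(items)))
    while any(i == j for i, j in enumerate(idx)):
        rng.shuffle(idx)
    return [items[j] for j in idx]


def image_awareness(
    sources: List[str],
    images: List[str],
    references: List[str],
    translate_fn: Callable[[str, str], str],      # hypothetical: (source, image) -> hypothesis
    sentence_score: Callable[[str, str], float],  # hypothetical: (hypothesis, reference) -> score
    seed: int = 0,
) -> float:
    """Mean per-sentence score gap between the congruent and the
    incongruent condition. A gap near zero suggests the system is
    insensitive to the image; a clearly positive gap suggests it
    actually exploits the visual input."""
    shuffled = derange(images, random.Random(seed))
    gaps = []
    for src, img_cong, img_incong, ref in zip(sources, images, shuffled, references):
        hyp_cong = translate_fn(src, img_cong)      # correct image
        hyp_incong = translate_fn(src, img_incong)  # mismatched image
        gaps.append(sentence_score(hyp_cong, ref) - sentence_score(hyp_incong, ref))
    return sum(gaps) / len(gaps)
```

Under this scheme, a system that ignores its visual input produces identical hypotheses in both conditions and scores a gap of exactly zero, which is precisely the sanity check the paper argues multimodal translation systems should pass.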
Anthology ID: D18-1329
Volume: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month: October-November
Year: 2018
Address: Brussels, Belgium
Editors: Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue: EMNLP
SIG: SIGDAT
Publisher: Association for Computational Linguistics
Pages: 2974–2978
URL: https://aclanthology.org/D18-1329
DOI: 10.18653/v1/D18-1329
Cite (ACL): Desmond Elliott. 2018. Adversarial Evaluation of Multimodal Machine Translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2974–2978, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal): Adversarial Evaluation of Multimodal Machine Translation (Elliott, EMNLP 2018)
PDF: https://preview.aclanthology.org/teach-a-man-to-fish/D18-1329.pdf