Generalization Differences between End-to-End and Neuro-Symbolic Vision-Language Reasoning Systems

Wang Zhu, Jesse Thomason, Robin Jia


Abstract
For vision-and-language reasoning tasks, both fully connectionist, end-to-end methods and hybrid, neuro-symbolic methods have achieved high in-distribution performance. In which out-of-distribution settings does each paradigm excel? We investigate this question on both single-image and multi-image visual question-answering through four types of generalization tests: a novel segment-combine test for multi-image queries, contrast sets, compositional generalization, and cross-benchmark transfer. Vision-and-language end-to-end trained systems exhibit sizeable performance drops across all of these tests. Neuro-symbolic methods suffer even more on cross-benchmark transfer from GQA to VQA, but they show smaller accuracy drops on the other generalization tests, and their performance quickly improves with few-shot training. Overall, our results demonstrate the complementary benefits of these two paradigms, and emphasize the importance of using a diverse suite of generalization tests to fully characterize model robustness to distribution shift.
Anthology ID:
2022.findings-emnlp.345
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4697–4711
URL:
https://aclanthology.org/2022.findings-emnlp.345
Cite (ACL):
Wang Zhu, Jesse Thomason, and Robin Jia. 2022. Generalization Differences between End-to-End and Neuro-Symbolic Vision-Language Reasoning Systems. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4697–4711, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Generalization Differences between End-to-End and Neuro-Symbolic Vision-Language Reasoning Systems (Zhu et al., Findings 2022)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2022.findings-emnlp.345.pdf