Yicheng He
2023
Dataset Bias Mitigation in Multiple-Choice Visual Question Answering and Beyond
Zhecan Wang | Long Chen | Haoxuan You | Keyang Xu | Yicheng He | Wenhao Li | Noel Codella | Kai-Wei Chang | Shih-Fu Chang
Findings of the Association for Computational Linguistics: EMNLP 2023
Vision-language (VL) understanding tasks evaluate models’ comprehension of complex visual scenes through multiple-choice questions. However, we have identified two dataset biases that models can exploit as shortcuts to resolve various VL tasks correctly without proper understanding. The first type of dataset bias is Unbalanced Matching bias, where the correct answer overlaps with the question and image more than the incorrect answers do. The second type of dataset bias is Distractor Similarity bias, where incorrect answers are overly dissimilar to the correct answer but significantly similar to other incorrect answers within the same sample. To address these dataset biases, we first propose Adversarial Data Synthesis (ADS) to generate synthetic training and debiased evaluation data. We then introduce Intra-sample Counterfactual Training (ICT) to assist models in utilizing the synthesized training data, particularly the counterfactual data, by focusing on intra-sample differentiation. Extensive experiments demonstrate the effectiveness of ADS and ICT in consistently improving model performance across different benchmarks, even in domain-shifted scenarios.
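As a minimal illustration of the Unbalanced Matching bias described above (and not the paper's ADS/ICT implementation), one can probe a benchmark with a text-only lexical-overlap heuristic: if the answer option sharing the most tokens with the question and image text is the gold answer far above chance, the shortcut exists. The use of a caption string as a stand-in for image content, and all names below, are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's code): a lexical-overlap probe for the
# Unbalanced Matching bias. Image content is approximated by a caption string,
# which is an assumption made purely for this example.

def token_overlap(text_a: str, text_b: str) -> int:
    """Count shared lowercase tokens between two strings."""
    return len(set(text_a.lower().split()) & set(text_b.lower().split()))

def overlap_prediction(question: str, caption: str, options: list[str]) -> int:
    """Return the index of the option with the largest overlap with question + caption."""
    context = question + " " + caption
    scores = [token_overlap(opt, context) for opt in options]
    return max(range(len(options)), key=lambda i: scores[i])

# If this "blind" heuristic matches the gold label well above chance across a
# dataset, the benchmark exhibits the overlap shortcut described in the abstract.
example = {
    "question": "What is the man holding while standing on the beach?",
    "caption": "A man standing on the beach holding a red surfboard.",
    "options": ["A red surfboard", "An umbrella", "A guitar", "A kite"],
    "label": 0,
}
pred = overlap_prediction(example["question"], example["caption"], example["options"])
print("heuristic prediction:", pred, "| gold:", example["label"])
```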
2022
Understanding ME? Multimodal Evaluation for Fine-grained Visual Commonsense
Zhecan Wang | Haoxuan You | Yicheng He | Wenhao Li | Kai-Wei Chang | Shih-Fu Chang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Visual commonsense understanding requires Vision Language (VL) models to not only understand image and text but also cross-reference between the two to fully integrate and comprehend the visual scene described. Recently, various approaches have been developed and have achieved high performance on visual commonsense benchmarks. However, it is unclear whether the models really understand the visual scene and underlying commonsense knowledge due to limited evaluation data resources. To provide an in-depth analysis, we present a Multimodal Evaluation (ME) pipeline to automatically generate question-answer pairs to test models’ understanding of the visual scene, text, and related knowledge. We then take a step further to show that training with the ME data boosts the model’s performance in standard VCR evaluation. Lastly, our in-depth analysis and comparison reveal interesting findings: (1) semantically low-level information can assist the learning of high-level information but not the opposite; (2) visual information is generally underutilized compared with text.
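A toy sketch of the kind of fine-grained analysis such an evaluation pipeline enables (not the paper's ME implementation): group generated question-answer pairs by a category tag such as visual scene, text, or commonsense knowledge, and compare per-category accuracy. The record fields and category names below are assumptions for illustration.

```python
# Toy illustration: per-category accuracy over generated QA pairs, used to
# expose which modality or knowledge type a model handles well. Field names
# ("category", "prediction", "label") are hypothetical.

from collections import defaultdict

def accuracy_by_category(records):
    """records: iterable of dicts with 'category', 'prediction', and 'label' keys."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["category"]] += 1
        correct[r["category"]] += int(r["prediction"] == r["label"])
    return {cat: correct[cat] / total[cat] for cat in total}

results = [
    {"category": "visual", "prediction": 1, "label": 1},
    {"category": "visual", "prediction": 0, "label": 2},
    {"category": "text", "prediction": 3, "label": 3},
]
print(accuracy_by_category(results))  # e.g., {'visual': 0.5, 'text': 1.0}
```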
Co-authors
- Haoxuan You 2
- Kai-Wei Chang 2
- Keyang Xu 1
- Long Chen (陈龙) 1
- Noel Codella 1