Towards Better Generalization in Open-Domain Question Answering by Mitigating Context Memorization
Zixuan Zhang, Revanth Gangi Reddy, Kevin Small, Tong Zhang, Heng Ji
Abstract
Open-domain Question Answering (OpenQA) aims to answer factual questions using an external large-scale knowledge corpus. However, real-world knowledge is not static: it updates and evolves continually. This dynamic nature of knowledge poses a vital challenge for OpenQA models, which must constantly adapt to the latest information to keep their answers accurate. In addition, it is still unclear how well an OpenQA model can transfer to completely new knowledge domains. In this paper, we investigate the generalization performance of a retrieval-augmented QA model in two specific scenarios: 1) adapting to updated versions of the same knowledge corpus, and 2) switching to completely different knowledge domains. We observe that the generalization challenges of OpenQA models stem from the reader's over-reliance on memorizing knowledge from the external corpus, which hinders generalization to a new corpus. We introduce Corpus-Invariant Tuning (CIT), a simple but effective training strategy that mitigates knowledge over-memorization by controlling the likelihood of retrieved contexts during training. Extensive experiments on multiple OpenQA benchmarks show that CIT achieves significantly better generalizability without compromising the model's performance on its original corpus and domain.
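The abstract describes CIT only at a high level ("controlling the likelihood of retrieved contexts during training"). As a rough illustration of what such a control could look like, the sketch below adds a hinged penalty on the reader's likelihood of regenerating the retrieved context on top of a standard sequence-to-sequence QA loss. This is a minimal sketch under stated assumptions, not the authors' implementation: the `t5-base` backbone, the `lambda_cit` weight, and the `tau` threshold are all hypothetical.

```python
# Illustrative sketch only: the paper's exact CIT objective is not given in
# the abstract, so the backbone, `tau`, and `lambda_cit` are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-base")  # assumed reader backbone
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

lambda_cit = 0.1  # hypothetical weight on the memorization penalty
tau = 2.0         # hypothetical NLL floor: penalize only memorized contexts

def training_step(question: str, context: str, answer: str) -> torch.Tensor:
    # Standard retrieval-augmented QA loss: generate the gold answer
    # conditioned on the question plus the retrieved context.
    qa_inputs = tokenizer(f"question: {question} context: {context}",
                          return_tensors="pt", truncation=True)
    answer_ids = tokenizer(answer, return_tensors="pt").input_ids
    qa_loss = model(**qa_inputs, labels=answer_ids).loss

    # The reader's likelihood of reproducing the retrieved context itself,
    # measured as mean per-token negative log-likelihood (NLL).
    q_inputs = tokenizer(f"question: {question}",
                         return_tensors="pt", truncation=True)
    ctx_ids = tokenizer(context, return_tensors="pt",
                        truncation=True).input_ids
    ctx_nll = model(**q_inputs, labels=ctx_ids).loss

    # Hinge penalty: active only when the context NLL drops below `tau`,
    # i.e., when the reader starts memorizing corpus-specific surface text.
    memorization_penalty = torch.clamp(tau - ctx_nll, min=0.0)

    return qa_loss + lambda_cit * memorization_penalty
```

The hinge is a design choice of this sketch rather than something stated in the abstract: without it, minimizing the combined loss would push the context likelihood arbitrarily low, which could degrade the reader's ability to use the retrieved context at all.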
- Anthology ID: 2024.findings-naacl.48
- Volume: Findings of the Association for Computational Linguistics: NAACL 2024
- Month: June
- Year: 2024
- Address: Mexico City, Mexico
- Editors: Kevin Duh, Helena Gomez, Steven Bethard
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 742–753
- URL: https://aclanthology.org/2024.findings-naacl.48
- Cite (ACL): Zixuan Zhang, Revanth Gangi Reddy, Kevin Small, Tong Zhang, and Heng Ji. 2024. Towards Better Generalization in Open-Domain Question Answering by Mitigating Context Memorization. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 742–753, Mexico City, Mexico. Association for Computational Linguistics.
- Cite (Informal): Towards Better Generalization in Open-Domain Question Answering by Mitigating Context Memorization (Zhang et al., Findings 2024)
- PDF: https://preview.aclanthology.org/naacl24-info/2024.findings-naacl.48.pdf