2022
Exploring The Landscape of Distributional Robustness for Question Answering Models
Anas Awadalla | Mitchell Wortsman | Gabriel Ilharco | Sewon Min | Ian Magnusson | Hannaneh Hajishirzi | Ludwig Schmidt
Findings of the Association for Computational Linguistics: EMNLP 2022
We conduct a large empirical evaluation to investigate the landscape of distributional robustness in question answering. Our investigation spans over 350 models and 16 question answering datasets, including a diverse set of architectures, model sizes, and adaptation methods (e.g., fine-tuning, adapter tuning, in-context learning, etc.). We find that, in many cases, model variations do not affect robustness, and in-distribution performance alone determines out-of-distribution performance. Moreover, our findings indicate that: i) zero-shot and in-context learning methods are more robust to distribution shifts than fully fine-tuned models; ii) few-shot prompt fine-tuned models exhibit better robustness than few-shot fine-tuned span prediction models; and iii) parameter-efficient and robustness-enhancing training methods provide no significant robustness improvements. In addition, we publicly release all evaluations to encourage researchers to further analyze robustness trends for question answering models.