Assessing Combinational Generalization of Language Models in Biased Scenarios

Yanbo Fang, Zuohui Fu, Xin Dong, Yongfeng Zhang, Gerard de Melo


Abstract
In light of the prominence of Pre-trained Language Models (PLMs) across numerous downstream tasks, shedding light on what they learn is an important endeavor. Whereas previous work focuses on assessing in-domain knowledge, we evaluate their generalization ability in biased scenarios built from component combinations, in which PLMs can easily learn shortcuts from the training corpus. Such shortcuts lead to poor performance on the test corpus, which is combinationally reconstructed from the training components. The results show that PLMs are able to overcome such distribution shifts for specific tasks and given sufficient data. We further find that overfitting leads the models to rely more on biases for prediction, thereby hurting the combinational generalization ability of PLMs.
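To make the notion of a combinationally reconstructed test corpus concrete, the sketch below shows one generic way such a split could be built: every individual component value appears during training, but the specific combinations reserved for testing never do. This is an illustrative assumption, not the paper's actual data construction; the component names ("sentiment", "domain") and held-out pairs are hypothetical.

```python
# Illustrative sketch (not the paper's method): a combinational train/test split
# where each component value is seen in training, but test-time combinations are not.
from itertools import product

sentiments = ["positive", "negative"]          # component A (hypothetical)
domains = ["movies", "restaurants", "hotels"]  # component B (hypothetical)

all_combinations = list(product(sentiments, domains))

# Hold out some combinations for testing; a model that memorizes component
# co-occurrence "shortcuts" in training is penalized on the recombined test set.
held_out = {("positive", "hotels"), ("negative", "movies")}

train_combinations = [c for c in all_combinations if c not in held_out]
test_combinations = [c for c in all_combinations if c in held_out]

# Sanity check: every component value still occurs in training, so the
# distribution shift at test time comes only from unseen combinations.
assert {s for s, _ in train_combinations} == set(sentiments)
assert {d for _, d in train_combinations} == set(domains)

print("train:", train_combinations)
print("test: ", test_combinations)
```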
Anthology ID:
2022.aacl-short.48
Volume:
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Month:
November
Year:
2022
Address:
Online only
Editors:
Yulan He, Heng Ji, Sujian Li, Yang Liu, Chia-Hui Chang
Venues:
AACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
392–397
URL:
https://aclanthology.org/2022.aacl-short.48
Cite (ACL):
Yanbo Fang, Zuohui Fu, Xin Dong, Yongfeng Zhang, and Gerard de Melo. 2022. Assessing Combinational Generalization of Language Models in Biased Scenarios. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 392–397, Online only. Association for Computational Linguistics.
Cite (Informal):
Assessing Combinational Generalization of Language Models in Biased Scenarios (Fang et al., AACL-IJCNLP 2022)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/2022.aacl-short.48.pdf