Identifying the Limits of Cross-Domain Knowledge Transfer for Pretrained Models

Zhengxuan Wu, Nelson F. Liu, Christopher Potts


Abstract
There is growing evidence that pretrained language models improve task-specific fine-tuning even where the task examples are radically different from those seen in training. We study an extreme case of transfer learning by providing a systematic exploration of how much transfer occurs when models are denied any information about word identity via random scrambling. In four classification tasks and two sequence labeling tasks, we evaluate LSTMs using GloVe embeddings, BERT, and baseline models. Among these models, we find that only BERT shows high rates of transfer into our scrambled domains, and only for the classification tasks, not the sequence labeling tasks. Our analyses seek to explain why transfer succeeds for some tasks but not others, to isolate the separate contributions of pretraining versus fine-tuning, to show that the fine-tuning process is not merely learning to unscramble the scrambled inputs, and to quantify the role of word frequency. Furthermore, our results suggest that current benchmarks may overestimate the degree to which these models actually understand language.
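The scrambling setup described in the abstract can be illustrated with a minimal sketch, assuming the scrambling is implemented as a consistent type-level permutation of the task vocabulary; the function names build_scrambler and scramble_sentence and the toy corpus below are hypothetical, and the authors' actual implementation is in the linked frankaging/limits-cross-domain-transfer repository.

    # Hypothetical sketch of type-level random scrambling of a tokenized corpus.
    # Every word type is consistently mapped to a randomly chosen replacement type,
    # so word identity is uninformative while sequence structure is preserved.
    import random

    def build_scrambler(vocab, seed=0):
        """Return a dict mapping each word type to a random replacement type."""
        rng = random.Random(seed)
        shuffled = list(vocab)
        rng.shuffle(shuffled)
        return dict(zip(vocab, shuffled))

    def scramble_sentence(tokens, mapping):
        """Apply the type-level mapping to one tokenized sentence."""
        return [mapping.get(tok, tok) for tok in tokens]

    # Toy usage example:
    corpus = [["the", "movie", "was", "great"], ["the", "plot", "was", "dull"]]
    vocab = sorted({tok for sent in corpus for tok in sent})
    mapping = build_scrambler(vocab, seed=42)
    scrambled = [scramble_sentence(sent, mapping) for sent in corpus]

Under this assumption, every occurrence of a word type maps to the same replacement, so the corpus keeps its distributional structure while word identity carries no information, matching the "denied any information about word identity" setup the abstract describes.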
Anthology ID:
2022.repl4nlp-1.11
Volume:
Proceedings of the 7th Workshop on Representation Learning for NLP
Month:
May
Year:
2022
Address:
Dublin, Ireland
Venue:
RepL4NLP
Publisher:
Association for Computational Linguistics
Pages:
100–110
URL:
https://aclanthology.org/2022.repl4nlp-1.11
DOI:
10.18653/v1/2022.repl4nlp-1.11
Cite (ACL):
Zhengxuan Wu, Nelson F. Liu, and Christopher Potts. 2022. Identifying the Limits of Cross-Domain Knowledge Transfer for Pretrained Models. In Proceedings of the 7th Workshop on Representation Learning for NLP, pages 100–110, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Identifying the Limits of Cross-Domain Knowledge Transfer for Pretrained Models (Wu et al., RepL4NLP 2022)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2022.repl4nlp-1.11.pdf
Video:
https://preview.aclanthology.org/ingestion-script-update/2022.repl4nlp-1.11.mp4
Code:
frankaging/limits-cross-domain-transfer
Data:
CoNLL-2003, GLUE, MRPC, QNLI, SNLI, SST