Snapshot-Guided Domain Adaptation for ELECTRA

Daixuan Cheng, Shaohan Huang, Jianfeng Liu, Yuefeng Zhan, Hao Sun, Furu Wei, Denvy Deng, Qi Zhang


Abstract
Discriminative pre-trained language models such as ELECTRA have achieved promising performance on a variety of general tasks. However, these generic pre-trained models struggle to capture the domain-specific knowledge required for domain-related tasks. In this work, we propose a novel domain-adaptation method for ELECTRA that dynamically selects domain-specific tokens and guides the discriminator to emphasize them, without introducing new training parameters. We show that by re-weighting the losses of domain-specific tokens, ELECTRA can be effectively adapted to different domains. Experimental results in the computer science and biomedical domains show that the proposed method achieves state-of-the-art results on domain-related tasks.
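
As a rough illustration of the loss re-weighting idea described in the abstract (a minimal sketch, not the paper's exact procedure: the snapshot-guided token selection is abstracted into a precomputed mask, and the names domain_mask and domain_weight are hypothetical), the following PyTorch snippet up-weights ELECTRA's per-token replaced-token-detection loss on tokens flagged as domain-specific:

    import torch
    import torch.nn.functional as F

    def reweighted_discriminator_loss(logits, labels, domain_mask, domain_weight=2.0):
        """ELECTRA replaced-token-detection loss with extra weight on
        tokens flagged as domain-specific.

        logits:      (batch, seq_len) discriminator scores
        labels:      (batch, seq_len) 1 if the token was replaced, else 0
        domain_mask: (batch, seq_len) bool, True for domain-specific tokens
        """
        # Standard per-token binary cross-entropy, kept unreduced so that
        # each token's loss can be re-weighted individually.
        per_token = F.binary_cross_entropy_with_logits(
            logits, labels.float(), reduction="none")
        # Domain-specific tokens receive domain_weight; all others weight 1.
        weights = torch.where(domain_mask,
                              torch.full_like(per_token, domain_weight),
                              torch.ones_like(per_token))
        # Weighted average so the loss scale stays comparable.
        return (weights * per_token).sum() / weights.sum()

    # Usage example with random inputs:
    logits = torch.randn(2, 8)
    labels = torch.randint(0, 2, (2, 8))
    domain_mask = torch.zeros(2, 8, dtype=torch.bool)
    domain_mask[:, :3] = True  # pretend the first 3 tokens are domain-specific
    loss = reweighted_discriminator_loss(logits, labels, domain_mask)

No new parameters are introduced, matching the abstract's claim: adaptation comes purely from re-scaling existing loss terms.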
Anthology ID:
2022.findings-emnlp.163
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2226–2232
URL:
https://aclanthology.org/2022.findings-emnlp.163
DOI:
10.18653/v1/2022.findings-emnlp.163
Cite (ACL):
Daixuan Cheng, Shaohan Huang, Jianfeng Liu, Yuefeng Zhan, Hao Sun, Furu Wei, Denvy Deng, and Qi Zhang. 2022. Snapshot-Guided Domain Adaptation for ELECTRA. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2226–2232, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Snapshot-Guided Domain Adaptation for ELECTRA (Cheng et al., Findings 2022)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/2022.findings-emnlp.163.pdf