Forging Multiple Training Objectives for Pre-trained Language Models via Meta-Learning
Hongqiu Wu, Ruixue Ding, Hai Zhao, Boli Chen, Pengjun Xie, Fei Huang, Min Zhang
Abstract
Multiple pre-training objectives compensate for the limited understanding capability of single-objective language modeling, serving the ultimate purpose of pre-trained language models (PrLMs): generalizing well across a wide range of scenarios. However, learning multiple objectives in a single model is challenging because their relative importance is unknown and they may conflict with one another. Empirical studies show that ad-hoc, manually configured objective sampling rarely lets the learned language representation converge to the desired optimum. Thus, we propose MOMETAS, a novel adaptive sampler based on meta-learning that learns the latent sampling pattern over arbitrary pre-training objectives. The design is lightweight, adding negligible training overhead. To validate our approach, we adopt five objectives and conduct continual pre-training with BERT-base and BERT-large models, where MOMETAS demonstrates consistent gains over rule-based sampling strategies on 14 natural language processing tasks.
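The abstract only sketches how MOMETAS works; the paper itself gives the full meta-learning algorithm. As a rough illustration of the adaptive-sampling idea, and not the authors' actual method, the Python sketch below maintains a softmax sampling distribution over named objectives and nudges it with a scalar reward signal (e.g., improvement in held-out loss after training on that objective). All names here (`AdaptiveObjectiveSampler`, the objective labels, the reward value) are hypothetical.

```python
import math
import random


class AdaptiveObjectiveSampler:
    """Illustrative bandit-style sampler over pre-training objectives.

    Not MOMETAS itself: this sketch only shows the general idea of
    adapting a sampling distribution from feedback, rather than using
    a fixed, manually chosen objective schedule.
    """

    def __init__(self, objectives, temperature=1.0, lr=0.1):
        self.objectives = list(objectives)               # e.g. ["MLM", "NSP", ...]
        self.scores = {o: 0.0 for o in self.objectives}  # running reward estimates
        self.temperature = temperature
        self.lr = lr

    def probabilities(self):
        # Softmax over the running scores gives the current sampling distribution.
        exps = {o: math.exp(s / self.temperature) for o, s in self.scores.items()}
        z = sum(exps.values())
        return {o: e / z for o, e in exps.items()}

    def sample(self):
        # Draw one objective according to the current distribution.
        r, acc = random.random(), 0.0
        for obj, p in self.probabilities().items():
            acc += p
            if r < acc:
                return obj
        return self.objectives[-1]

    def update(self, objective, reward):
        # Exponential moving average of the observed reward for this objective,
        # so frequently helpful objectives get sampled more often over time.
        old = self.scores[objective]
        self.scores[objective] = (1 - self.lr) * old + self.lr * reward
```

A hypothetical training loop would alternate sampling and feedback, for example:

```python
sampler = AdaptiveObjectiveSampler(["MLM", "NSP", "SOP", "SBO", "PSP"])
obj = sampler.sample()            # pick the next pre-training objective
# ... run one training step with `obj`, then measure a reward signal ...
sampler.update(obj, reward=0.42)  # hypothetical reward, e.g. held-out loss drop
```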
- Anthology ID: 2022.findings-emnlp.482
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2022
- Month: December
- Year: 2022
- Address: Abu Dhabi, United Arab Emirates
- Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 6454–6466
- URL: https://aclanthology.org/2022.findings-emnlp.482
- DOI: 10.18653/v1/2022.findings-emnlp.482
- Cite (ACL): Hongqiu Wu, Ruixue Ding, Hai Zhao, Boli Chen, Pengjun Xie, Fei Huang, and Min Zhang. 2022. Forging Multiple Training Objectives for Pre-trained Language Models via Meta-Learning. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 6454–6466, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Cite (Informal): Forging Multiple Training Objectives for Pre-trained Language Models via Meta-Learning (Wu et al., Findings 2022)
- PDF: https://preview.aclanthology.org/landing_page/2022.findings-emnlp.482.pdf