Abstract
Masked Language Modeling (MLM) has been widely used as the denoising objective in pre-training language models (PrLMs). Existing PrLMs commonly adopt a Random-Token Masking strategy in which a fixed masking ratio is applied and different contents are masked with equal probability throughout the entire training. However, the effect of masking on the model depends on its pre-training status, which changes as training progresses. In this paper, we show that such time-invariant MLM settings for masking ratio and masked content are unlikely to deliver an optimal outcome, which motivates us to explore the influence of time-variant MLM settings. We propose two scheduled masking approaches that adaptively tune the masking ratio and masked content in different training stages, improving pre-training efficiency and effectiveness as verified on downstream tasks. Our work is a pioneering study of time-variant masking strategies for ratio and content, and it gives a better understanding of how masking ratio and masked content influence MLM pre-training.
- Anthology ID: 2023.acl-long.400
- Volume: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month: July
- Year: 2023
- Address: Toronto, Canada
- Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 7255–7267
- URL: https://aclanthology.org/2023.acl-long.400
- DOI: 10.18653/v1/2023.acl-long.400
- Cite (ACL): Dongjie Yang, Zhuosheng Zhang, and Hai Zhao. 2023. Learning Better Masking for Better Language Model Pre-training. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7255–7267, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal): Learning Better Masking for Better Language Model Pre-training (Yang et al., ACL 2023)
- PDF: https://aclanthology.org/2023.acl-long.400.pdf
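For intuition, the "time-variant masking ratio" idea summarized in the abstract above can be sketched in a few lines. This is a minimal, hypothetical sketch: the linear-decay schedule, the start/end ratios, and all function names below are illustrative assumptions, not the paper's reported configuration.

```python
# Illustrative sketch of a time-variant (scheduled) masking ratio for MLM.
# Assumption: a simple linear decay of the masking ratio over training steps;
# the paper's actual schedules are described in the full text, not here.
import random


def masking_ratio(step: int, total_steps: int,
                  start_ratio: float = 0.30, end_ratio: float = 0.15) -> float:
    """Linearly anneal the masking ratio from start_ratio to end_ratio."""
    progress = min(step / max(total_steps, 1), 1.0)
    return start_ratio + (end_ratio - start_ratio) * progress


def mask_tokens(token_ids, step, total_steps, mask_id=103):
    """Corrupt a token sequence at the ratio dictated by the current step."""
    ratio = masking_ratio(step, total_steps)
    corrupted, labels = [], []
    for tok in token_ids:
        if random.random() < ratio:
            corrupted.append(mask_id)   # replace with [MASK] (e.g., id 103 in bert-base-uncased)
            labels.append(tok)          # the model must recover the original token
        else:
            corrupted.append(tok)
            labels.append(-100)         # position ignored by the MLM loss
    return corrupted, labels


# Early in training more positions are masked than late in training.
print(masking_ratio(0, 100_000))        # 0.30
print(masking_ratio(100_000, 100_000))  # 0.15
```

The same scheduling hook could in principle also reweight which tokens are eligible for masking (the "masked content" axis), but the abstract does not specify the exact schedules, so none are assumed here.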