“Is Whole Word Masking Always Better for Chinese BERT?”: Probing on Chinese Grammatical Error Correction
Yong Dai, Linyang Li, Cong Zhou, Zhangyin Feng, Enbo Zhao, Xipeng Qiu, Piji Li, Duyu Tang
Abstract
Whole word masking (WWM), which masks all subwords corresponding to a word at once, makes a better English BERT model. For the Chinese language, however, there are no subwords because each token is an atomic character. The meaning of a word in Chinese is different in that a word is a compositional unit consisting of multiple characters. This difference motivates us to investigate whether WWM leads to better context-understanding ability for Chinese BERT. To achieve this, we introduce two probing tasks related to grammatical error correction and ask pretrained models to revise or insert tokens in a masked language modeling manner. We construct a dataset including labels for 19,075 tokens in 10,448 sentences. We train three Chinese BERT models with standard character-level masking (CLM), WWM, and a combination of CLM and WWM, respectively. Our major findings are as follows: First, when one character needs to be inserted or replaced, the model trained with CLM performs best. Second, when more than one character needs to be handled, WWM is the key to better performance. Finally, when fine-tuned on sentence-level downstream tasks, models trained with different masking strategies perform comparably.
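To make the contrast between the two pretraining objectives concrete, the following minimal sketch applies character-level masking (CLM) and whole word masking (WWM) to a toy, pre-segmented Chinese sentence. The segmentation, masking probability, and helper names are illustrative assumptions, not the authors' implementation.

```python
import random

MASK = "[MASK]"

# Hypothetical segmentation: each inner list is one word made of single-character tokens,
# e.g. "我们" (we) and "学习" (study) are two-character words.
words = [["我", "们"], ["学", "习"], ["自", "然"], ["语", "言"]]

def clm_mask(words, prob=0.15, seed=0):
    """Character-level masking: each character is masked independently."""
    rng = random.Random(seed)
    chars = [c for w in words for c in w]
    return [MASK if rng.random() < prob else c for c in chars]

def wwm_mask(words, prob=0.15, seed=0):
    """Whole word masking: if a word is selected, all of its characters are masked."""
    rng = random.Random(seed)
    out = []
    for w in words:
        if rng.random() < prob:
            out.extend([MASK] * len(w))  # mask every character of the word
        else:
            out.extend(w)
    return out

print("CLM:", " ".join(clm_mask(words)))
print("WWM:", " ".join(wwm_mask(words)))
```

The probing tasks described in the abstract use the same masked language modeling interface at test time: a suspected erroneous character is replaced by a [MASK] slot (or a new [MASK] slot is inserted at the target position), and the pretrained model's prediction for that slot is read off as its correction.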
- Anthology ID: 2022.findings-acl.1
- Volume: Findings of the Association for Computational Linguistics: ACL 2022
- Month: May
- Year: 2022
- Address: Dublin, Ireland
- Editors: Smaranda Muresan, Preslav Nakov, Aline Villavicencio
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 1–8
- URL: https://aclanthology.org/2022.findings-acl.1
- DOI: 10.18653/v1/2022.findings-acl.1
- Cite (ACL): Yong Dai, Linyang Li, Cong Zhou, Zhangyin Feng, Enbo Zhao, Xipeng Qiu, Piji Li, and Duyu Tang. 2022. “Is Whole Word Masking Always Better for Chinese BERT?”: Probing on Chinese Grammatical Error Correction. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1–8, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal): “Is Whole Word Masking Always Better for Chinese BERT?”: Probing on Chinese Grammatical Error Correction (Dai et al., Findings 2022)
- PDF: https://preview.aclanthology.org/naacl24-info/2022.findings-acl.1.pdf