BERTwich: Extending BERT’s Capabilities to Model Dialectal and Noisy Text

Aarohi Srivastava, David Chiang


Abstract
Real-world NLP applications often deal with nonstandard text (e.g., dialectal, informal, or misspelled text). However, language models like BERT deteriorate in the face of dialect variation or noise. How do we push BERT’s modeling capabilities to encompass nonstandard text? Fine-tuning helps, but it is designed for specializing a model to a task and does not seem to bring about the deeper, more pervasive changes needed to adapt a model to nonstandard language. In this paper, we introduce the novel idea of sandwiching BERT’s encoder stack between additional encoder layers trained to perform masked language modeling on noisy text. We find that our approach, paired with recent work on including character-level noise in fine-tuning data, can promote zero-shot transfer to dialectal text, as well as reduce the distance in the embedding space between words and their noisy counterparts.
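To make the "sandwich" idea concrete, below is a minimal illustrative sketch, not the authors' released code, of what wrapping BERT's encoder stack between extra encoder layers and continuing masked language modeling on character-noised text could look like. It assumes HuggingFace Transformers with PyTorch; the helper names add_sandwich_layers and add_char_noise, and the simple character-deletion noise scheme, are assumptions for illustration only.

    # Minimal sketch (assumed setup): prepend and append one freshly initialized
    # BERT encoder layer around the pre-trained stack, then continue MLM training
    # on noise-augmented text. Helper names here are illustrative, not the paper's.
    import random
    import torch
    from transformers import BertForMaskedLM, BertTokenizerFast
    from transformers.models.bert.modeling_bert import BertLayer

    def add_sandwich_layers(model: BertForMaskedLM) -> BertForMaskedLM:
        """Sandwich BERT's encoder between two randomly initialized encoder layers."""
        config = model.config
        bottom, top = BertLayer(config), BertLayer(config)
        model.bert.encoder.layer = torch.nn.ModuleList(
            [bottom, *model.bert.encoder.layer, top]
        )
        model.config.num_hidden_layers = len(model.bert.encoder.layer)
        return model

    def add_char_noise(text: str, p: float = 0.1) -> str:
        """One simple character-level noise scheme: randomly drop non-space characters."""
        return "".join(c for c in text if c == " " or random.random() > p)

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    model = add_sandwich_layers(BertForMaskedLM.from_pretrained("bert-base-uncased"))

    # The sandwiched model would then be trained with the usual MLM objective on
    # noised text (e.g., via DataCollatorForLanguageModeling); here we only run a
    # forward pass to show the wrapped stack still produces MLM logits.
    inputs = tokenizer(add_char_noise("the quick brown fox jumps over the lazy dog"),
                       return_tensors="pt")
    logits = model(**inputs).logits
    print(logits.shape)  # (1, sequence_length, vocab_size)

In this sketch only the number of encoder layers changes; how many layers to add, which parameters to update, and how noise is injected during fine-tuning are design choices detailed in the paper itself.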
Anthology ID:
2023.findings-emnlp.1037
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
15510–15521
URL:
https://aclanthology.org/2023.findings-emnlp.1037
DOI:
10.18653/v1/2023.findings-emnlp.1037
Cite (ACL):
Aarohi Srivastava and David Chiang. 2023. BERTwich: Extending BERT’s Capabilities to Model Dialectal and Noisy Text. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 15510–15521, Singapore. Association for Computational Linguistics.
Cite (Informal):
BERTwich: Extending BERT’s Capabilities to Model Dialectal and Noisy Text (Srivastava & Chiang, Findings 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-5/2023.findings-emnlp.1037.pdf