Abstract
Machine translation of user-generated code-mixed inputs to English is of crucial importance in applications like web search and targeted advertising. We address the scarcity of parallel training data for such models by designing a strategy for converting existing non-code-mixed parallel data sources into code-mixed parallel data. We present an m-BERT based procedure whose core learnable component is a ternary sequence labeling model that can be trained with only a limited code-mixed corpus. We show a 5.8 point increase in BLEU on heavily code-mixed sentences by training a translation model with our data augmentation strategy on a Hindi-English code-mixed translation task.
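The core learnable component described above is a ternary token-level sequence labeler on top of m-BERT. Below is a minimal sketch of such a model using the Hugging Face transformers library; the label names (KEEP, SWITCH, OTHER), the example sentence, and the untrained classification head are illustrative assumptions, not the paper's actual scheme. In the paper's setup, the head would be fine-tuned on the limited code-mixed corpus before use.

```python
# Sketch: ternary token labeling with m-BERT (Hugging Face transformers).
# Label names are hypothetical placeholders, not the paper's scheme.
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

LABELS = ["KEEP", "SWITCH", "OTHER"]  # hypothetical ternary label set

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")
model = BertForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=len(LABELS)
)
model.eval()  # head is randomly initialized here; fine-tune before real use

# Romanized Hindi example sentence (illustrative).
words = "mujhe naya phone chahiye".split()
inputs = tokenizer(words, is_split_into_words=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

# Project sub-word predictions back onto whole words, taking the
# label of each word's first sub-token.
pred_ids = logits.argmax(dim=-1)[0].tolist()
seen = set()
for idx, wid in enumerate(inputs.word_ids(batch_index=0)):
    if wid is not None and wid not in seen:
        seen.add(wid)
        print(words[wid], "->", LABELS[pred_ids[idx]])
```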
- Anthology ID: 2021.naacl-main.459
- Volume: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
- Month: June
- Year: 2021
- Address: Online
- Editors: Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
- Venue: NAACL
- Publisher: Association for Computational Linguistics
- Pages: 5760–5766
- URL: https://preview.aclanthology.org/icon-24-ingestion/2021.naacl-main.459/
- DOI: 10.18653/v1/2021.naacl-main.459
- Cite (ACL): Abhirut Gupta, Aditya Vavre, and Sunita Sarawagi. 2021. Training Data Augmentation for Code-Mixed Translation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5760–5766, Online. Association for Computational Linguistics.
- Cite (Informal): Training Data Augmentation for Code-Mixed Translation (Gupta et al., NAACL 2021)
- PDF: https://preview.aclanthology.org/icon-24-ingestion/2021.naacl-main.459.pdf
- Code: shruikan20/spoken-tutorial-dataset