@inproceedings{hamed-etal-2023-investigating,
    title = "Investigating Lexical Replacements for {A}rabic-{E}nglish Code-Switched Data Augmentation",
    author = "Hamed, Injy  and
      Habash, Nizar  and
      Abdennadher, Slim  and
      Vu, Ngoc Thang",
    editor = "Ojha, Atul Kr.  and
      Liu, Chao-hong  and
      Vylomova, Ekaterina  and
      Pirinen, Flammie  and
      Abbott, Jade  and
      Washington, Jonathan  and
      Oco, Nathaniel  and
      Malykh, Valentin  and
      Logacheva, Varvara  and
      Zhao, Xiaobing",
    booktitle = "Proceedings of the Sixth Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2023)",
    month = may,
    year = "2023",
    address = "Dubrovnik, Croatia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.loresmt-1.7/",
    doi = "10.18653/v1/2023.loresmt-1.7",
    pages = "86--100",
    abstract = "Data sparsity is a main problem hindering the development of code-switching (CS) NLP systems. In this paper, we investigate data augmentation techniques for synthesizing dialectal Arabic-English CS text. We perform lexical replacements using word-aligned parallel corpora where CS points are either randomly chosen or learnt using a sequence-to-sequence model. We compare these approaches against dictionary-based replacements. We assess the quality of generated sentences through human evaluation and evaluate the effectiveness of data augmentation on machine translation (MT), automatic speech recognition (ASR), and speech translation (ST) tasks. Results show that using a predictive model results in more natural CS sentences compared to the random approach, as reported in human judgements. In the downstream tasks, despite the random approach generating more data, both approaches perform equally (outperforming dictionary-based replacements). Overall, data augmentation achieves 34{\%} improvement in perplexity, 5.2{\%} relative improvement on WER for ASR task, +4.0-5.1 BLEU points on MT task, and +2.1-2.2 BLEU points on ST over a baseline trained on available data without augmentation."
}