@inproceedings{przystupa-abdul-mageed-2019-neural,
    title = "Neural Machine Translation of Low-Resource and Similar Languages with Backtranslation",
    author = "Przystupa, Michael  and
      Abdul-Mageed, Muhammad",
    editor = "Bojar, Ond{\v{r}}ej  and
      Chatterjee, Rajen  and
      Federmann, Christian  and
      Fishel, Mark  and
      Graham, Yvette  and
      Haddow, Barry  and
      Huck, Matthias  and
      Yepes, Antonio Jimeno  and
      Koehn, Philipp  and
      Martins, Andr{\'e}  and
      Monz, Christof  and
      Negri, Matteo  and
      N{\'e}v{\'e}ol, Aur{\'e}lie  and
      Neves, Mariana  and
      Post, Matt  and
      Turchi, Marco  and
      Verspoor, Karin",
    booktitle = "Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)",
    month = aug,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/W19-5431/",
    doi = "10.18653/v1/W19-5431",
    pages = "224--235",
    abstract = "We present our contribution to the WMT19 Similar Language Translation shared task. We investigate the utility of neural machine translation on three low-resource, similar language pairs: Spanish {--} Portuguese, Czech {--} Polish, and Hindi {--} Nepali. Since state-of-the-art neural machine translation systems still require large amounts of bitext, which we do not have for the pairs we consider, we focus primarily on incorporating monolingual data into our models with backtranslation. In our analysis, we found Transformer models to work best on Spanish {--} Portuguese and Czech {--} Polish translation, whereas LSTMs with global attention worked best on Hindi {--} Nepali translation."
}

Markdown (Informal)
[Neural Machine Translation of Low-Resource and Similar Languages with Backtranslation](https://aclanthology.org/W19-5431/) (Przystupa & Abdul-Mageed, WMT 2019)