@inproceedings{post-duh-2019-jhu,
    title = "{JHU} 2019 Robustness Task System Description",
    author = "Post, Matt  and
      Duh, Kevin",
    editor = "Bojar, Ond{\v{r}}ej  and
      Chatterjee, Rajen  and
      Federmann, Christian  and
      Fishel, Mark  and
      Graham, Yvette  and
      Haddow, Barry  and
      Huck, Matthias  and
      Yepes, Antonio Jimeno  and
      Koehn, Philipp  and
      Martins, Andr{\'e}  and
      Monz, Christof  and
      Negri, Matteo  and
      N{\'e}v{\'e}ol, Aur{\'e}lie  and
      Neves, Mariana  and
      Post, Matt  and
      Turchi, Marco  and
      Verspoor, Karin",
    booktitle = "Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)",
    month = aug,
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/iwcs-25-ingestion/W19-5366/",
    doi = "10.18653/v1/W19-5366",
    pages = "552--558",
    abstract = "We describe the JHU submissions to the French{--}English, Japanese{--}English, and English{--}Japanese Robustness Task at WMT 2019. Our goal was to evaluate the performance of baseline systems on both the official noisy test set as well as news data, in order to ensure that performance gains in the latter did not come at the expense of general-domain performance. To this end, we built straightforward 6-layer Transformer models and experimented with a handful of variables including subword processing (FR{\textrightarrow}EN) and a handful of hyperparameters settings (JA{\ensuremath{\leftrightarrow}}EN). As expected, our systems performed reasonably."
}