@inproceedings{chinea-rios-etal-2018-automatic,
    title = "Are Automatic Metrics Robust and Reliable in Specific Machine Translation Tasks?",
    author = "Chinea-Rios, Mara  and
      Peris, Alvaro  and
      Casacuberta, Francisco",
    editor = "P{\'e}rez-Ortiz, Juan Antonio  and
      S{\'a}nchez-Mart{\'i}nez, Felipe  and
      Espl{\`a}-Gomis, Miquel  and
      Popovi{\'c}, Maja  and
      Rico, Celia  and
      Martins, Andr{\'e}  and
      Van den Bogaert, Joachim  and
      Forcada, Mikel L.",
    booktitle = "Proceedings of the 21st Annual Conference of the European Association for Machine Translation",
    month = may,
    year = "2018",
    address = "Alicante, Spain",
    url = "https://preview.aclanthology.org/ingest-emnlp/2018.eamt-main.9/",
    pages = "109--118",
    abstract = "We present a comparison of automatic metrics against human evaluations of translation quality in several scenarios which were unexplored up to now. Our experimentation was conducted on translation hypotheses that were problematic for the automatic metrics, as the results greatly diverged from one metric to another. We also compared three different translation technologies. Our evaluation shows that in most cases, the metrics capture the human criteria. However, we face failures of the automatic metrics when applied to some domains and systems. Interestingly, we find that automatic metrics applied to the neural machine translation hypotheses provide the most reliable results. Finally, we provide some advice when dealing with these problematic domains."
}