@inproceedings{eksi-etal-2021-explaining,
    title = "Explaining Errors in Machine Translation with Absolute Gradient Ensembles",
    author = "Eksi, Melda  and
      Gelbing, Erik  and
      Stieber, Jonathan  and
      Vu, Chi Viet",
    editor = "Gao, Yang  and
      Eger, Steffen  and
      Zhao, Wei  and
      Lertvittayakumjorn, Piyawat  and
      Fomicheva, Marina",
    booktitle = "Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems",
    month = nov,
    year = "2021",
    address = "Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2021.eval4nlp-1.23/",
    doi = "10.18653/v1/2021.eval4nlp-1.23",
    pages = "238--249",
    abstract = "Current research on quality estimation of machine translation focuses on the sentence-level quality of the translations. By using explainability methods, we can use these quality estimations for word-level error identification. In this work, we compare different explainability techniques and investigate gradient-based and perturbation-based methods by measuring their performance and required computational efforts. Throughout our experiments, we observed that using absolute word scores boosts the performance of gradient-based explainers significantly. Further, we combine explainability methods to ensembles to exploit the strengths of individual explainers to get better explanations. We propose the usage of absolute gradient-based methods. These work comparably well to popular perturbation-based ones while being more time-efficient."
}
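
As a rough illustration of the idea summarised in the abstract (not the authors' released code), the sketch below scores each word by the absolute gradient of a sentence-level quality estimate with respect to the token embeddings, and combines several explainers by averaging min-max-normalised scores. The QE model interface, function names, and the normalisation scheme are assumptions made for this example.

```python
import torch

def absolute_gradient_word_scores(qe_model, embeddings, attention_mask):
    """Word-level relevance via the absolute gradient of a sentence-level
    quality score w.r.t. token embeddings (generic sketch; the hypothetical
    qe_model is assumed to return one quality score per sentence)."""
    embeddings = embeddings.clone().detach().requires_grad_(True)
    scores = qe_model(inputs_embeds=embeddings, attention_mask=attention_mask)
    scores.sum().backward()
    # Absolute gradients, reduced over the embedding dimension; the abstract
    # reports that absolute word scores boost gradient-based explainers.
    return embeddings.grad.abs().sum(dim=-1)

def ensemble_word_scores(score_list):
    """Ensemble several explainers by averaging min-max-normalised scores
    (one plausible combination rule; the paper studies explainer ensembles)."""
    normalised = []
    for s in score_list:
        s = s - s.min(dim=-1, keepdim=True).values
        s = s / (s.max(dim=-1, keepdim=True).values + 1e-8)
        normalised.append(s)
    return torch.stack(normalised).mean(dim=0)
```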