Abstract
Recent advances in automatic evaluation metrics for text have shown that deep contextualized word representations, such as those generated by BERT encoders, are helpful for designing metrics that correlate well with human judgements. At the same time, it has been argued that contextualized word representations exhibit sub-optimal statistical properties for encoding the true similarity between words or sentences. In this paper, we present two techniques for improving encoding representations for similarity metrics: a batch-mean centering strategy that improves statistical properties, and a computationally efficient tempered Word Mover Distance for better fusion of the information in the contextualized word representations. We conduct numerical experiments that demonstrate the robustness of our techniques, reporting results over various BERT-backbone learned metrics and achieving state-of-the-art correlation with human ratings on several benchmarks.
- Anthology ID:
- 2020.eval4nlp-1.6
- Volume:
- Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems
- Month:
- November
- Year:
- 2020
- Address:
- Online
- Venue:
- Eval4NLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 51–59
- URL:
- https://aclanthology.org/2020.eval4nlp-1.6
- DOI:
- 10.18653/v1/2020.eval4nlp-1.6
- Cite (ACL):
- Xi Chen, Nan Ding, Tomer Levinboim, and Radu Soricut. 2020. Improving Text Generation Evaluation with Batch Centering and Tempered Word Mover Distance. In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, pages 51–59, Online. Association for Computational Linguistics.
- Cite (Informal):
- Improving Text Generation Evaluation with Batch Centering and Tempered Word Mover Distance (Chen et al., Eval4NLP 2020)
- PDF:
- https://preview.aclanthology.org/nodalida-main-page/2020.eval4nlp-1.6.pdf
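To make the batch-mean centering idea from the abstract concrete, the sketch below subtracts the batch mean from a set of contextualized word vectors. This is a minimal NumPy illustration of the general technique, not the paper's implementation; the toy data and function name are assumptions for demonstration only.

```python
import numpy as np

def batch_mean_center(embeddings):
    """Subtract the batch mean from each word vector.

    embeddings: (n_tokens, dim) array, e.g. contextualized BERT token
    vectors collected over a batch. Centering removes the common
    direction shared by all vectors in the batch, which otherwise
    inflates pairwise cosine similarities.
    """
    mean = embeddings.mean(axis=0, keepdims=True)
    return embeddings - mean

# Toy example (hypothetical data): vectors dominated by one shared direction.
rng = np.random.default_rng(0)
common = rng.normal(size=8)                    # shared component across the batch
vecs = rng.normal(size=(4, 8)) * 0.1 + common  # small per-token variation on top

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vecs[0], vecs[1]))     # close to 1.0: dominated by the shared direction
centered = batch_mean_center(vecs)
print(cosine(centered[0], centered[1]))  # similarities spread out after centering
```

After centering, the column-wise mean of the batch is zero, so similarity scores reflect per-token differences rather than the component every vector shares.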