A Review of Human Evaluation for Style Transfer
Eleftheria Briakou, Sweta Agrawal, Ke Zhang, Joel Tetreault, Marine Carpuat
Abstract
This paper reviews and summarizes human evaluation practices described in 97 style transfer papers with respect to three main evaluation aspects: style transfer, meaning preservation, and fluency. In principle, evaluations by human raters should be the most reliable. However, in style transfer papers, we find that protocols for human evaluations are often underspecified and not standardized, which hampers the reproducibility of research in this field and progress toward better human and automatic evaluation methods.
- Anthology ID: 2021.gem-1.6
- Volume: Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)
- Month: August
- Year: 2021
- Address: Online
- Venue: GEM
- SIG: SIGGEN
- Publisher: Association for Computational Linguistics
- Pages: 58–67
- URL: https://aclanthology.org/2021.gem-1.6
- DOI: 10.18653/v1/2021.gem-1.6
- Cite (ACL): Eleftheria Briakou, Sweta Agrawal, Ke Zhang, Joel Tetreault, and Marine Carpuat. 2021. A Review of Human Evaluation for Style Transfer. In Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), pages 58–67, Online. Association for Computational Linguistics.
- Cite (Informal): A Review of Human Evaluation for Style Transfer (Briakou et al., GEM 2021)
- PDF: https://preview.aclanthology.org/remove-xml-comments/2021.gem-1.6.pdf
- Code: Elbria/ST-human-review