Intrinsic Task-based Evaluation for Referring Expression Generation

Guanyi Chen, Fahime Same, Kees Van Deemter

Abstract
Recently, a human evaluation study of Referring Expression Generation (REG) models reached an unexpected conclusion: on WEBNLG, Referring Expressions (REs) generated by state-of-the-art neural models were indistinguishable not only from the REs in WEBNLG itself but also from the REs produced by a simple rule-based system. Here, we argue that this finding could stem from the use of a purely ratings-based human evaluation (a common practice in Natural Language Generation). To investigate this issue, we propose an intrinsic task-based evaluation for REG models in which, in addition to rating the quality of REs, participants were asked to accomplish two meta-level tasks. One task concerns the referential success of each RE; the other asks participants to suggest a better alternative for each RE. The outcomes suggest that, compared with previous evaluations, the new evaluation protocol assesses the performance of each REG model more comprehensively and makes participants' ratings more reliable and more discriminating.
Anthology ID: 2024.acl-long.389
Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, André Martins, Vivek Srikumar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 7220–7231
URL: https://aclanthology.org/2024.acl-long.389
DOI: 10.18653/v1/2024.acl-long.389
Cite (ACL):
Guanyi Chen, Fahime Same, and Kees Van Deemter. 2024. Intrinsic Task-based Evaluation for Referring Expression Generation. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7220–7231, Bangkok, Thailand. Association for Computational Linguistics.
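Cite (BibTeX):
The same citation in BibTeX form, assembled from the metadata above; the entry key is assumed here, following the Anthology's usual lastname-etal-year-firstword convention, since no bibkey is given on this page:
@inproceedings{chen-etal-2024-intrinsic,
    title = "Intrinsic Task-based Evaluation for Referring Expression Generation",
    author = "Chen, Guanyi and Same, Fahime and Van Deemter, Kees",
    editor = "Ku, Lun-Wei and Martins, Andr{\'e} and Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.389",
    doi = "10.18653/v1/2024.acl-long.389",
    pages = "7220--7231",
}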
Cite (Informal):
Intrinsic Task-based Evaluation for Referring Expression Generation (Chen et al., ACL 2024)
PDF: https://aclanthology.org/2024.acl-long.389.pdf