Abstract
Effective summarisation evaluation metrics enable researchers and practitioners to compare different summarisation systems efficiently. Estimating the effectiveness of an automatic evaluation metric, termed meta-evaluation, is a critically important research question. In this position paper, we review recent meta-evaluation practices for summarisation evaluation metrics and find that (1) evaluation metrics are primarily meta-evaluated on benchmarks built from examples drawn from news summarisation datasets, and (2) there has been a noticeable shift in research focus towards evaluating the faithfulness of generated summaries. We argue that the time is ripe to build more diverse benchmarks that enable the development of more robust evaluation metrics and the analysis of the generalisation ability of existing evaluation metrics. In addition, we call for research focusing on user-centric quality dimensions that consider the generated summary's communicative goal and the role of summarisation in the workflow.
- Anthology ID: 2024.findings-emnlp.869
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
- Month: November
- Year: 2024
- Address: Miami, Florida, USA
- Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 14795–14808
- URL: https://aclanthology.org/2024.findings-emnlp.869
- DOI: 10.18653/v1/2024.findings-emnlp.869
- Cite (ACL): Xiang Dai, Sarvnaz Karimi, and Biaoyan Fang. 2024. A Critical Look at Meta-evaluating Summarisation Evaluation Metrics. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 14795–14808, Miami, Florida, USA. Association for Computational Linguistics.
- Cite (Informal): A Critical Look at Meta-evaluating Summarisation Evaluation Metrics (Dai et al., Findings 2024)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2024.findings-emnlp.869.pdf