Questioning the Validity of Summarization Datasets and Improving Their Factual Consistency

Yanzhu Guo, Chloé Clavel, Moussa Kamal Eddine, Michalis Vazirgiannis


Abstract
The topic of summarization evaluation has recently attracted a surge of attention due to the rapid development of abstractive summarization systems. However, the formulation of the task is rather ambiguous: neither the linguistics nor the natural language processing community has succeeded in giving a mutually agreed-upon definition. Because of this lack of a well-defined formulation, a large number of popular abstractive summarization datasets are constructed in a manner that neither guarantees validity nor meets one of the most essential criteria of summarization: factual consistency. In this paper, we address this issue by combining state-of-the-art factual consistency models to identify the problematic instances present in popular summarization datasets. We release SummFC, a filtered summarization dataset with improved factual consistency, and demonstrate that models trained on this dataset achieve improved performance across nearly all quality aspects. We argue that our dataset should become a valid benchmark for developing and evaluating summarization systems.
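The filtering idea summarized above can be illustrated with a short sketch. The snippet below is not the authors' released pipeline: the `toy_consistency_score` stand-in (a simple content-word precision check) and the threshold value are illustrative assumptions, shown only to make the filtering step concrete; the paper instead combines dedicated state-of-the-art factual consistency models to score and remove problematic instances.

```python
# Minimal sketch of filtering a summarization dataset by factual consistency.
# NOTE: `toy_consistency_score` is a stand-in (content-word precision of the
# summary against the source document); the paper combines dedicated factual
# consistency models instead. The threshold and example data are assumptions.

import re

STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "to", "is", "was", "were"}


def content_words(text: str) -> set:
    """Lowercased word tokens minus a small stopword list."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS}


def toy_consistency_score(document: str, summary: str) -> float:
    """Fraction of the summary's content words that also appear in the document."""
    summary_words = content_words(summary)
    if not summary_words:
        return 0.0
    document_words = content_words(document)
    return len(summary_words & document_words) / len(summary_words)


def filter_dataset(pairs, threshold=0.6):
    """Keep only (document, summary) pairs whose consistency score clears the threshold."""
    return [
        (doc, summ)
        for doc, summ in pairs
        if toy_consistency_score(doc, summ) >= threshold
    ]


if __name__ == "__main__":
    pairs = [
        ("The city council approved the new park budget on Tuesday.",
         "The council approved the park budget."),   # supported by the source
        ("The city council approved the new park budget on Tuesday.",
         "The mayor vetoed the stadium proposal."),  # unsupported (hallucinated)
    ]
    for doc, summ in filter_dataset(pairs):
        print("kept:", summ)
```

Running the sketch keeps only the first pair, whose summary is supported by its source; in the actual SummFC construction, the scoring function would be replaced by the combined factual consistency models described in the paper.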
Anthology ID: 2022.emnlp-main.386
Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 5716–5727
URL: https://aclanthology.org/2022.emnlp-main.386
Cite (ACL): Yanzhu Guo, Chloé Clavel, Moussa Kamal Eddine, and Michalis Vazirgiannis. 2022. Questioning the Validity of Summarization Datasets and Improving Their Factual Consistency. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5716–5727, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): Questioning the Validity of Summarization Datasets and Improving Their Factual Consistency (Guo et al., EMNLP 2022)
PDF: https://preview.aclanthology.org/ingestion-script-update/2022.emnlp-main.386.pdf