Abstract
Factual inconsistencies between the output of abstractive summarization models and the original documents are frequently observed. Assessing factual consistency requires the reasoning capability to find subtle clues that reveal whether a model-generated summary is consistent with the original document. This paper proposes a fine-grained two-stage Fact Consistency assessment framework for Summarization models (SumFC). Given a document and a summary sentence, in the first stage SumFC selects the top-K document sentences most relevant to the summary sentence. In the second stage, the model performs fine-grained consistency reasoning at the sentence level and then aggregates all sentences' consistency scores to obtain the final assessment result. We obtain training data pairs by data synthesis and adopt a contrastive loss over these pairs to help the model identify subtle cues. Experimental results show that SumFC achieves a significant improvement over previous state-of-the-art methods. Our experiments also indicate that SumFC distinguishes fine-grained differences better.
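The two-stage procedure described in the abstract can be sketched as follows. This is a minimal illustration only: the function names, the token-overlap relevance heuristic, and the aggregation rule are assumptions made for readability, not the paper's actual learned sentence selector or consistency model.

```python
# Minimal sketch of a two-stage consistency check in the spirit of SumFC.
# All function names and scoring heuristics below are illustrative
# assumptions; the paper uses learned models for both stages.

def top_k_relevant(doc_sentences, summary_sentence, k=3):
    """Stage 1: pick the k document sentences most relevant to the summary sentence.
    Relevance is approximated here by token overlap."""
    summary_tokens = set(summary_sentence.lower().split())
    scored = [
        (len(summary_tokens & set(s.lower().split())), s)
        for s in doc_sentences
    ]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [s for _, s in scored[:k]]


def sentence_consistency(doc_sentence, summary_sentence):
    """Stage 2 (per sentence): score how well the summary sentence is supported.
    A real system would use an NLI-style model; this stub uses token recall."""
    summary_tokens = set(summary_sentence.lower().split())
    doc_tokens = set(doc_sentence.lower().split())
    return len(summary_tokens & doc_tokens) / max(len(summary_tokens), 1)


def assess(document, summary_sentence, k=3):
    """Aggregate the sentence-level scores into one consistency score."""
    doc_sentences = [s.strip() for s in document.split(".") if s.strip()]
    selected = top_k_relevant(doc_sentences, summary_sentence, k)
    scores = [sentence_consistency(s, summary_sentence) for s in selected]
    return max(scores) if scores else 0.0


if __name__ == "__main__":
    doc = "The company reported record profits in 2020. Its CEO resigned in March."
    print(assess(doc, "The CEO resigned in March."))
```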
- Anthology ID: 2021.emnlp-main.9
- Volume: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
- Month: November
- Year: 2021
- Address: Online and Punta Cana, Dominican Republic
- Editors: Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 107–116
- URL: https://aclanthology.org/2021.emnlp-main.9
- DOI: 10.18653/v1/2021.emnlp-main.9
- Cite (ACL): Sen Zhang, Jianwei Niu, and Chuyuan Wei. 2021. Fine-grained Factual Consistency Assessment for Abstractive Summarization Models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 107–116, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
- Cite (Informal): Fine-grained Factual Consistency Assessment for Abstractive Summarization Models (Zhang et al., EMNLP 2021)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2021.emnlp-main.9.pdf
- Data: FEVER, MultiNLI