Abstract
Pre-trained abstractive summarization models can generate fluent summaries and achieve high ROUGE scores. Previous research has found that these models often generate summaries that are inconsistent with their context document and contain nonfactual information. To evaluate factuality in document summarization, a document-level Natural Language Inference (NLI) classifier can be used. However, training such a classifier requires large-scale, high-quality factual and nonfactual samples. To that end, we introduce NonFactS, a data generation model, to synthesize nonfactual summaries given a context document and a human-annotated (reference) factual summary. Compared to previous methods, our nonfactual samples are more abstractive and more similar to their corresponding factual samples, resulting in state-of-the-art performance on two factuality evaluation benchmarks, FALSESUM and SUMMAC. Our experiments demonstrate that even without human-annotated summaries, NonFactS can use random sentences to generate nonfactual summaries, and a classifier trained on these samples generalizes to out-of-domain documents.
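The abstract frames factuality evaluation as document-level NLI: the source document is the premise, the candidate summary is the hypothesis, and the classifier's entailment probability serves as a factuality score. The sketch below illustrates that setup with an off-the-shelf NLI model (`roberta-large-mnli`); it is not the paper's NonFactS-trained classifier, and the example texts are invented for illustration only.

```python
# Minimal sketch of document-level factuality scoring with a generic NLI model.
# Assumption: a plain MNLI model stands in for a classifier trained on
# NonFactS-style factual/nonfactual pairs.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-large-mnli"  # generic NLI model, used as a stand-in
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def factuality_score(document: str, summary: str) -> float:
    """Return P(entailment) for (document -> summary); the document is
    truncated to the encoder's maximum input length."""
    inputs = tokenizer(document, summary, truncation=True,
                       max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze(0)
    # roberta-large-mnli label order: contradiction, neutral, entailment.
    return probs[2].item()

doc = "The city council approved the new transit budget on Monday."
print(factuality_score(doc, "The council approved the transit budget."))   # high
print(factuality_score(doc, "The council rejected the transit budget."))   # low
```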
- Anthology ID: 2023.findings-acl.400
- Volume: Findings of the Association for Computational Linguistics: ACL 2023
- Month: July
- Year: 2023
- Address: Toronto, Canada
- Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 6405–6419
- URL: https://aclanthology.org/2023.findings-acl.400
- DOI: 10.18653/v1/2023.findings-acl.400
- Cite (ACL): Amir Soleimani, Christof Monz, and Marcel Worring. 2023. NonFactS: NonFactual Summary Generation for Factuality Evaluation in Document Summarization. In Findings of the Association for Computational Linguistics: ACL 2023, pages 6405–6419, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal): NonFactS: NonFactual Summary Generation for Factuality Evaluation in Document Summarization (Soleimani et al., Findings 2023)
- PDF: https://preview.aclanthology.org/nschneid-patch-5/2023.findings-acl.400.pdf