Abstract
Abstractive summarization models are commonly trained using maximum likelihood estimation, which assumes a deterministic (one-point) target distribution in which an ideal model will assign all the probability mass to the reference summary. This assumption may lead to performance degradation during inference, where the model needs to compare several system-generated (candidate) summaries that have deviated from the reference summary. To address this problem, we propose a novel training paradigm which assumes a non-deterministic distribution so that different candidate summaries are assigned probability mass according to their quality. Our method achieves a new state-of-the-art result on the CNN/DailyMail (47.78 ROUGE-1) and XSum (49.07 ROUGE-1) datasets. Further analysis also shows that our model can estimate probabilities of candidate summaries that are more correlated with their level of quality.
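The sketch below illustrates, in PyTorch-style Python, the kind of ranking objective the abstract describes: candidate summaries are pre-sorted by quality (e.g., ROUGE against the reference), and the model's length-normalized log-probabilities are pushed to respect that order via pairwise margins. This is a minimal illustration, not the authors' exact implementation; the function name and the `margin` and `alpha` values are placeholders (see the released code at yixinl7/brio for the real hyperparameters).

```python
import torch

def ranking_loss(log_probs, lengths, margin=0.001, alpha=1.0):
    """Pairwise margin ranking loss over candidate summaries.

    log_probs: (num_candidates,) summed token log-probabilities of each
        candidate under the model, pre-sorted from highest- to
        lowest-quality (e.g., by ROUGE against the reference).
    lengths: (num_candidates,) token counts, used to length-normalize
        so longer candidates are not penalized for having more tokens.
    """
    # Length-normalized sequence score: f(S) = log p(S) / |S|^alpha.
    scores = log_probs / lengths.float().pow(alpha)
    loss = scores.new_zeros(())
    n = scores.size(0)
    for i in range(n):
        for j in range(i + 1, n):
            # A higher-quality candidate (index i) should score at least
            # (j - i) * margin above a lower-quality one (index j).
            loss = loss + torch.clamp(
                scores[j] - scores[i] + (j - i) * margin, min=0.0
            )
    return loss
```

In training, a term like this would be combined with the standard MLE (cross-entropy) loss on the reference summary, weighted by a scalar, so the model keeps generating fluent text while also learning to rank candidates by quality.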
- Anthology ID:
- 2022.acl-long.207
- Volume:
- Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month:
- May
- Year:
- 2022
- Address:
- Dublin, Ireland
- Editors:
- Smaranda Muresan, Preslav Nakov, Aline Villavicencio
- Venue:
- ACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 2890–2903
- URL:
- https://preview.aclanthology.org/build-pipeline-with-new-library/2022.acl-long.207/
- DOI:
- 10.18653/v1/2022.acl-long.207
- Cite (ACL):
- Yixin Liu, Pengfei Liu, Dragomir Radev, and Graham Neubig. 2022. BRIO: Bringing Order to Abstractive Summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2890–2903, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal):
- BRIO: Bringing Order to Abstractive Summarization (Liu et al., ACL 2022)
- PDF:
- https://preview.aclanthology.org/build-pipeline-with-new-library/2022.acl-long.207.pdf
- Code:
- yixinl7/brio + additional community code
- Data:
- CNN/Daily Mail, New York Times Annotated Corpus, XSum