GECSum: Generative Evaluation-Driven Sequence Level Contrastive Learning for Abstractive Summarization

Jiawen Xie, Shaoting Zhang, Xiaofan Zhang


Abstract
While dominant in abstractive summarization, transformer-based language models trained with standard maximum likelihood estimation (MLE) remain challenged by two discrepancies: the misalignment between token-level training and sequence-level evaluation, and the divergence between the teacher-forcing training regime and auto-regressive generation behavior. Recent studies have shown that sequence-level contrastive learning, which uses the quality differences between multiple summaries as prior information, can effectively mitigate these issues. However, in existing methods the contrastive signals are typically determined by specific evaluation metrics, so the model is trained to align with the preferences of those metrics and its performance is capped by their evaluation capabilities. Inspired by prior work that treats the evaluation of generated text as a text generation problem, we propose a generative evaluation-driven contrastive learning framework, which leverages the semantic understanding capabilities of the abstractive model itself to evaluate summaries in a reference-based setting. In this way, our method establishes a connection between the model's reference-based evaluation and reference-free generation scenarios, allowing the two to share the benefits of improvements in model capability. Extensive experiments on four summarization datasets demonstrate that our method outperforms the previous state of the art in overall comprehensive performance, and various empirical analyses further substantiate its effectiveness.
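To make the abstract's training signal concrete, the following is a minimal sketch, not the authors' released implementation. It assumes a BART checkpoint (facebook/bart-large-cnn), scores each candidate summary against the reference with a BARTScore-style average log-likelihood (one common reading of "treating evaluation as text generation"), and uses that ranking in a pairwise margin loss over the model's own sequence scores, as in prior sequence-level contrastive methods. The function names generative_eval_score and contrastive_loss, and the margin value, are illustrative.

```python
# Illustrative sketch of generative-evaluation-driven sequence-level
# contrastive learning; assumptions noted above, not the authors' code.
import torch
import torch.nn.functional as F
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")


def generative_eval_score(reference: str, candidate: str) -> torch.Tensor:
    """Reference-based generative evaluation: average log-likelihood of the
    candidate summary conditioned on the reference (BARTScore-style).
    Higher means the model judges the candidate closer to the reference."""
    enc = tokenizer(reference, return_tensors="pt", truncation=True)
    labels = tokenizer(candidate, return_tensors="pt", truncation=True).input_ids
    with torch.no_grad():
        logits = model(**enc, labels=labels).logits            # (1, T, vocab)
    token_logp = (
        F.log_softmax(logits, dim=-1)
        .gather(-1, labels.unsqueeze(-1))
        .squeeze(-1)                                           # (1, T)
    )
    return token_logp.mean()


def contrastive_loss(model_scores: torch.Tensor,
                     eval_scores: torch.Tensor,
                     margin: float = 0.01) -> torch.Tensor:
    """Pairwise margin loss: candidates are ranked by the generative
    evaluation scores, and the model's own (length-normalized) sequence
    log-probabilities must respect that ranking."""
    order = torch.argsort(eval_scores, descending=True)
    s = model_scores[order]
    loss = s.new_zeros(())
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            # A better-ranked candidate i should outscore j by a rank gap.
            loss = loss + F.relu(s[j] - s[i] + margin * (j - i))
    return loss
```

In training, model_scores would come from scoring candidates against the source document with the model being fine-tuned, so that improving the model strengthens both its generation and its evaluation, which is the shared-benefit connection the abstract describes.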
Anthology ID:
2024.lrec-main.670
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
7581–7595
URL:
https://aclanthology.org/2024.lrec-main.670
Cite (ACL):
Jiawen Xie, Shaoting Zhang, and Xiaofan Zhang. 2024. GECSum: Generative Evaluation-Driven Sequence Level Contrastive Learning for Abstractive Summarization. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 7581–7595, Torino, Italia. ELRA and ICCL.
Cite (Informal):
GECSum: Generative Evaluation-Driven Sequence Level Contrastive Learning for Abstractive Summarization (Xie et al., LREC-COLING 2024)
PDF:
https://preview.aclanthology.org/nschneid-patch-3/2024.lrec-main.670.pdf