AlignSum: Data Pyramid Hierarchical Fine-tuning for Aligning with Human Summarization Preference

Yang Han, Yiming Wang, Rui Wang, Lu Chen, Kai Yu


Abstract
Text summarization tasks commonly employ Pre-trained Language Models (PLMs) to fit diverse standard datasets. While these PLMs excel in automatic evaluations, they frequently underperform in human evaluations, indicating a deviation between their generated summaries and human summarization preferences. This discrepancy is likely due to the low quality of fine-tuning datasets and the limited availability of high-quality human-annotated data that reflect true human preference. To address this challenge, we introduce AlignSum, a novel framework for aligning with human summarization preferences. The framework consists of three parts: first, we construct a Data Pyramid with extractive, abstractive, and human-annotated summary data; second, we apply Gaussian Resampling to remove summaries with extreme lengths; finally, we perform two-stage hierarchical fine-tuning on the resampled Data Pyramid. We apply AlignSum to PLMs on the human-annotated CNN/DailyMail and BBC XSum datasets. Experiments show that with AlignSum, PLMs like BART-Large surpass 175B GPT-3 in both automatic and human evaluations. This demonstrates that AlignSum significantly enhances the alignment of language models with human summarization preferences.
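The Gaussian Resampling step filters out summaries with extreme lengths. A minimal sketch of one plausible realization, assuming lengths are kept within a z-score band around the mean (the function name, the word-level length measure, and the cutoff `k` are illustrative assumptions, not the paper's exact procedure):

```python
import statistics

def gaussian_resample(summaries, k=1.0):
    """Keep summaries whose word-count length lies within
    mean +/- k * std of the length distribution.
    Hedged sketch: the paper's actual resampling may differ."""
    lengths = [len(s.split()) for s in summaries]
    mean = statistics.mean(lengths)
    std = statistics.pstdev(lengths)
    if std == 0:
        # All summaries have the same length; nothing to filter.
        return list(summaries)
    return [s for s, n in zip(summaries, lengths)
            if abs(n - mean) <= k * std]
```

Applied to a pool of candidate summaries, this drops outliers such as a 100-word summary among otherwise short ones, leaving a length distribution closer to the bulk of the data.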
Anthology ID:
2024.findings-emnlp.498
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
8506–8522
URL:
https://aclanthology.org/2024.findings-emnlp.498
DOI:
10.18653/v1/2024.findings-emnlp.498
Cite (ACL):
Yang Han, Yiming Wang, Rui Wang, Lu Chen, and Kai Yu. 2024. AlignSum: Data Pyramid Hierarchical Fine-tuning for Aligning with Human Summarization Preference. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 8506–8522, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
AlignSum: Data Pyramid Hierarchical Fine-tuning for Aligning with Human Summarization Preference (Han et al., Findings 2024)
PDF:
https://preview.aclanthology.org/landing_page/2024.findings-emnlp.498.pdf
Software:
 2024.findings-emnlp.498.software.zip