Improving Fairness of Large Language Models in Multi-document Summarization

Haoyuan Li, Rui Zhang, Snigdha Chaturvedi


Abstract
Fairness in multi-document summarization (MDS) is crucial for providing comprehensive views across documents with diverse social attribute values, which can significantly impact decision-making. For example, a summarization system that tends to overrepresent negative reviews of products can mislead customers into disregarding good products. Previous works measure fairness in MDS at two levels: summary-level and corpus-level. Summary-level fairness evaluates each individual summary, whereas corpus-level fairness evaluates a corpus of summaries in aggregate. Recent methods primarily address summary-level fairness. We propose FairPO, a preference tuning method that targets both summary-level and corpus-level fairness in MDS. To improve summary-level fairness, we propose to generate preference pairs by perturbing document sets. To improve corpus-level fairness, we propose fairness-aware preference tuning that dynamically adjusts the weights of preference pairs. Our experiments show that FairPO outperforms strong baselines while maintaining the critical qualities of summaries. The code is available at https://github.com/leehaoyuan/coverage_fairness
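The abstract does not spell out the training objective, but the idea of re-weighting preference pairs during preference tuning can be illustrated with a short sketch. The code below is an assumption-laden illustration, not the paper's implementation: it shows a DPO-style (direct preference optimization) loss in which each preference pair receives a weight (`pair_weights`) that could be adjusted dynamically based on corpus-level fairness. The function name, weight values, and the hyperparameter `beta` are hypothetical.

```python
import torch
import torch.nn.functional as F

def weighted_preference_loss(policy_chosen_logps, policy_rejected_logps,
                             ref_chosen_logps, ref_rejected_logps,
                             pair_weights, beta=0.1):
    """DPO-style preference loss where each pair carries its own weight.

    Raising the weight of pairs whose preferred summary covers social
    attribute values that are currently underrepresented at the corpus
    level is one way a fairness-aware scheme could steer training.
    """
    # Implicit rewards: log-probability ratios of the policy vs. the frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Standard DPO logistic loss per pair, then re-weighted and averaged.
    per_pair_loss = -F.logsigmoid(chosen_rewards - rejected_rewards)
    return (pair_weights * per_pair_loss).mean()

# Toy usage: per-sequence log-probabilities for a batch of three preference pairs.
policy_chosen = torch.tensor([-12.0, -15.5, -10.2])
policy_rejected = torch.tensor([-13.1, -14.9, -11.8])
ref_chosen = torch.tensor([-12.4, -15.0, -10.6])
ref_rejected = torch.tensor([-12.8, -15.2, -11.5])
# Hypothetical dynamic weights: up-weight the pair that helps the lagging attribute value.
weights = torch.tensor([1.5, 0.8, 1.0])
loss = weighted_preference_loss(policy_chosen, policy_rejected,
                                ref_chosen, ref_rejected, weights)
print(loss.item())
```

In such a scheme, the weights would be recomputed as training proceeds so that imbalances observed across previously generated summaries influence which preference pairs drive later updates; how FairPO actually computes them is described in the paper, not here.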
Anthology ID:
2025.acl-short.90
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1143–1154
URL:
https://preview.aclanthology.org/landing_page/2025.acl-short.90/
Cite (ACL):
Haoyuan Li, Rui Zhang, and Snigdha Chaturvedi. 2025. Improving Fairness of Large Language Models in Multi-document Summarization. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1143–1154, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Improving Fairness of Large Language Models in Multi-document Summarization (Li et al., ACL 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.acl-short.90.pdf