AdvSumm: Adversarial Training for Bias Mitigation in Text Summarization

Mukur Gupta, Nikhil Reddy Varimalla, Nicholas Deas, Melanie Subbiah, Kathleen McKeown


Abstract
Large Language Models (LLMs) have achieved impressive performance in text summarization and are increasingly deployed in real-world applications. However, these systems often inherit associative and framing biases from pre-training data, leading to inappropriate or unfair outputs in downstream tasks. In this work, we present AdvSumm (Adversarial Summarization), a domain-agnostic training framework designed to mitigate bias in text summarization through improved generalization. Inspired by adversarial robustness, AdvSumm introduces a novel Perturber component that applies gradient-guided perturbations at the embedding level of Sequence-to-Sequence models, enhancing the model’s robustness to input variations. We empirically demonstrate that AdvSumm effectively reduces different types of bias in summarization—specifically, name-nationality bias and political framing bias—without compromising summarization quality. Compared to standard transformers and data augmentation techniques like back-translation, AdvSumm achieves stronger bias mitigation performance across benchmark datasets.
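The abstract describes a Perturber that applies gradient-guided perturbations at the embedding level of a sequence-to-sequence model. The paper's exact Perturber is not specified here, but the general mechanism can be sketched with a standard FGSM-style step in PyTorch: embed the input, take the gradient of the loss with respect to the embeddings, and nudge the embeddings along the gradient's sign before computing an adversarial loss. The toy model, shapes, and epsilon below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy sequence classifier standing in for a seq2seq summarizer
# (illustrative only; AdvSumm operates on a real summarization model).
class TinyModel(nn.Module):
    def __init__(self, vocab=50, dim=16, classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.head = nn.Linear(dim, classes)

    def forward_from_embeds(self, embeds):
        # Mean-pool token embeddings, then classify.
        return self.head(embeds.mean(dim=1))

model = TinyModel()
loss_fn = nn.CrossEntropyLoss()
tokens = torch.randint(0, 50, (4, 10))   # batch of 4 sequences, length 10
labels = torch.randint(0, 3, (4,))

# 1) Clean forward pass, keeping the gradient w.r.t. the embeddings.
embeds = model.emb(tokens).detach().requires_grad_(True)
loss = loss_fn(model.forward_from_embeds(embeds), labels)
loss.backward()

# 2) FGSM-style step: perturb embeddings along the gradient sign.
epsilon = 0.05  # illustrative perturbation budget
perturbed = (embeds + epsilon * embeds.grad.sign()).detach()

# 3) The adversarial loss on perturbed embeddings would then drive the
#    robust training update (e.g., combined with the clean loss).
adv_loss = loss_fn(model.forward_from_embeds(perturbed), labels)
```

In a full training loop, one would backpropagate `adv_loss` (possibly mixed with the clean loss) through the model parameters each step; here the sketch stops at computing the perturbed-input loss.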
Anthology ID:
2025.newsum-main.12
Volume:
Proceedings of The 5th New Frontiers in Summarization Workshop
Month:
November
Year:
2025
Address:
Hybrid
Editors:
Yue Dong, Wen Xiao, Haopeng Zhang, Rui Zhang, Ori Ernst, Lu Wang, Fei Liu
Venues:
NewSum | WS
Publisher:
Association for Computational Linguistics
Pages:
172–182
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.newsum-main.12/
Cite (ACL):
Mukur Gupta, Nikhil Reddy Varimalla, Nicholas Deas, Melanie Subbiah, and Kathleen McKeown. 2025. AdvSumm: Adversarial Training for Bias Mitigation in Text Summarization. In Proceedings of The 5th New Frontiers in Summarization Workshop, pages 172–182, Hybrid. Association for Computational Linguistics.
Cite (Informal):
AdvSumm: Adversarial Training for Bias Mitigation in Text Summarization (Gupta et al., NewSum 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.newsum-main.12.pdf