Less Is More? Examining Fairness in Pruned Large Language Models for Summarising Opinions

Nannan Huang, Haytham M. Fayek, Xiuzhen Zhang


Abstract
Model compression through post-training pruning offers a way to reduce model size and computational requirements without significantly impacting model performance. However, the effect of pruning on the fairness of LLM-generated summaries remains unexplored, particularly for opinion summarisation, where biased outputs could influence public views. In this paper, we present a comprehensive empirical analysis of opinion summarisation, examining three state-of-the-art pruning methods and various calibration sets across three open-source LLMs using four fairness metrics. Our systematic analysis reveals that pruning methods have a larger impact on fairness than calibration sets. Building on these insights, we propose High Gradient Low Activation (HGLA) pruning, which identifies and removes parameters that are redundant for input processing but influential in output generation. Our experiments demonstrate that HGLA can better maintain or even improve fairness compared to existing methods, showing promise across models and tasks where traditional methods have limitations. Our human evaluation shows that HGLA-generated outputs are fairer than those of existing state-of-the-art pruning methods.
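The paper's exact HGLA scoring rule is not reproduced on this page; the sketch below is a hypothetical illustration of the general idea stated in the abstract: on a small calibration batch, score weights by high gradient magnitude and low input activation, then zero out the highest-scoring weights. The toy layer, the calibration data, and the ratio-based score are assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical sketch of a "high gradient, low activation" pruning score for one
# linear layer. The scoring formula is an assumption for illustration, not the
# paper's actual HGLA method.
import torch
import torch.nn as nn

torch.manual_seed(0)

layer = nn.Linear(16, 8)          # toy layer standing in for an LLM projection
calib = torch.randn(32, 16)       # small calibration batch
target = torch.randn(32, 8)       # proxy objective used only to obtain gradients

# Forward/backward pass on the calibration data to get weight gradients.
loss = nn.functional.mse_loss(layer(calib), target)
loss.backward()

grad_mag = layer.weight.grad.abs()        # "high gradient": influence on the output
act_norm = calib.abs().mean(dim=0)        # per-input-feature activation magnitude
# "low activation": input features the layer rarely relies on when processing inputs.
score = grad_mag / (act_norm.unsqueeze(0) + 1e-8)

# Zero out the k weights with the highest high-gradient / low-activation score.
sparsity = 0.5
k = int(sparsity * score.numel())
threshold = score.flatten().kthvalue(score.numel() - k).values
mask = (score <= threshold).float()
with torch.no_grad():
    layer.weight.mul_(mask)
```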
Anthology ID:
2025.emnlp-main.909
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
18005–18029
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.909/
Cite (ACL):
Nannan Huang, Haytham M. Fayek, and Xiuzhen Zhang. 2025. Less Is More? Examining Fairness in Pruned Large Language Models for Summarising Opinions. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 18005–18029, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Less Is More? Examining Fairness in Pruned Large Language Models for Summarising Opinions (Huang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.909.pdf
Checklist:
 2025.emnlp-main.909.checklist.pdf