2025
Wanda++: Pruning Large Language Models via Regional Gradients
Yifan Yang | Kai Zhen | Bhavana Ganesh | Aram Galstyan | Goeric Huybrechts | Markus Müller | Jonas M. Kübler | Rupak Vignesh Swaminathan | Athanasios Mouchtaris | Sravan Babu Bodapati | Nathan Susanj | Zheng Zhang | Jack FitzGerald | Abhishek Kumar
Findings of the Association for Computational Linguistics: ACL 2025
Large Language Model (LLM) pruning seeks to remove unimportant weights for inference speedup with minimal accuracy impact. However, existing methods often suffer from accuracy degradation without full-model, sparsity-aware fine-tuning. This paper presents Wanda++, a novel pruning framework that outperforms state-of-the-art methods by utilizing decoder-block-level regional gradients. Specifically, Wanda++ is the first to improve the pruning score with regional gradients, and it proposes an efficient regional optimization method to minimize pruning-induced discrepancies between the dense and sparse decoder outputs. Notably, Wanda++ improves perplexity by up to 32% over Wanda on the language modeling task and generalizes effectively to downstream tasks. Moreover, despite updating weights during regional optimization, Wanda++ remains orthogonal to sparsity-aware fine-tuning and further reduces perplexity to a large extent when combined with LoRA. Our approach is lightweight, pruning a 7B LLaMA model in under 10 minutes on a single H100 GPU.
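To make the abstract's idea of a gradient-augmented pruning score concrete, here is a minimal PyTorch sketch. It combines the standard Wanda score (weight magnitude times input-activation norm) with a first-order sensitivity term from a decoder-block-level gradient. The exact Wanda++ formulation (scaling, gradient aggregation, the mixing coefficient alpha) is an assumption for illustration, not taken from the paper.

```python
import torch

def regional_pruning_score(weight, act_norm, grad, alpha=1.0):
    """Per-weight importance score (hypothetical combination).

    weight:   (out_features, in_features) linear-layer weight
    act_norm: (in_features,) L2 norm of each input channel's activations,
              collected on a small calibration set (as in Wanda)
    grad:     (out_features, in_features) gradient of a decoder-block-level
              ("regional") loss with respect to the weight
    alpha:    assumed mixing coefficient between the two terms
    """
    wanda_score = weight.abs() * act_norm.unsqueeze(0)  # |W| * ||X||_2
    grad_score = (grad * weight).abs()                   # first-order sensitivity
    return wanda_score + alpha * grad_score

def prune_2_4(weight, score):
    """Apply 2:4 semi-structured sparsity: keep the top-2 scores in each group of 4."""
    out_f, in_f = weight.shape
    groups = score.view(out_f, in_f // 4, 4)
    # indices of the two lowest-scoring weights in every group of four
    idx = groups.argsort(dim=-1)[..., :2]
    mask = torch.ones_like(groups, dtype=torch.bool).scatter_(-1, idx, False)
    return weight * mask.view(out_f, in_f)
```

In this sketch, the regional optimization step described in the abstract would then adjust the surviving weights of one decoder block at a time so its sparse output matches the dense output, rather than fine-tuning the full model.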
2024
SpeechGuard: Exploring the Adversarial Robustness of Multi-modal Large Language Models
Raghuveer Peri | Sai Muralidhar Jayanthi | Srikanth Ronanki | Anshu Bhatia | Karel Mundnich | Saket Dingliwal | Nilaksh Das | Zejiang Hou | Goeric Huybrechts | Srikanth Vishnubhotla | Daniel Garcia-Romero | Sundararajan Srinivasan | Kyu Han | Katrin Kirchhoff
Findings of the Association for Computational Linguistics: ACL 2024
Integrated Speech and Large Language Models (SLMs) that can follow speech instructions and generate relevant text responses have gained popularity lately. However, the safety and robustness of these models remain largely unclear. In this work, we investigate the potential vulnerabilities of such instruction-following speech-language models to adversarial attacks and jailbreaking. Specifically, we design algorithms that can generate adversarial examples to jailbreak SLMs in both white-box and black-box attack settings without human involvement. Additionally, we propose countermeasures to thwart such jailbreaking attacks. Our models, trained on dialog data with speech instructions, achieve state-of-the-art performance on the spoken question-answering task, scoring over 80% on both safety and helpfulness metrics. Despite safety guardrails, our jailbreaking experiments demonstrate the vulnerability of SLMs to adversarial perturbations and transfer attacks, with average attack success rates of 90% and 10%, respectively, when evaluated on a dataset of carefully designed harmful questions spanning 12 different toxic categories. However, we demonstrate that our proposed countermeasures reduce the attack success rate significantly.
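As a rough illustration of the white-box setting described above, the sketch below runs a PGD-style perturbation on the input waveform to increase the likelihood of a target (unsafe) response. The `slm(waveform, labels=...)` interface, the budget `eps`, and the step size are assumptions for illustration; the paper's actual attack algorithms may differ.

```python
import torch

def pgd_audio_attack(slm, waveform, target_ids, eps=0.002, step=5e-4, iters=100):
    """Hypothetical white-box audio jailbreak: find an imperceptible perturbation
    delta (L-infinity bounded by eps) that minimizes the model's loss on a
    target response, pushing the SLM toward producing that response."""
    delta = torch.zeros_like(waveform, requires_grad=True)
    for _ in range(iters):
        # assumed HF-style interface: returns causal LM loss on target_ids
        loss = slm(waveform + delta, labels=target_ids).loss
        loss.backward()
        with torch.no_grad():
            # gradient descent on the target-response loss, then project back
            # into the imperceptibility budget
            delta -= step * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (waveform + delta).detach()
```

A black-box variant would replace the gradient step with query-based estimates or transfer the perturbation from a surrogate model, which is consistent with the transfer-attack results reported in the abstract.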