Breach in the Shield: Unveiling the Vulnerabilities of Large Language Models

Runpeng Dai, Run Yang, Fan Zhou, Hongtu Zhu


Abstract
Large Language Models (LLMs) and Vision-Language Models (VLMs) have achieved impressive performance across a wide range of tasks, yet they remain vulnerable to carefully crafted perturbations. In this study, we seek to pinpoint the sources of this fragility by identifying parameters and input dimensions (pixels or token embeddings) that are susceptible to such perturbations. To this end, we propose a stability measure called FI (First-order local Influence), which is rooted in information geometry and quantifies the sensitivity of individual parameters and input dimensions. Our extensive analysis across LLMs and VLMs (from 1.5B to 13B parameters) reveals that: (I) a small subset of parameters or input dimensions with high FI values disproportionately contributes to model brittleness, and (II) mitigating the influence of these vulnerable parameters during model merging improves performance.
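
The abstract describes FI only at a high level; the paper defines it via information geometry. As a purely illustrative sketch (not the authors' actual formula), the snippet below computes a generic first-order sensitivity proxy, |θ_i · ∂L/∂θ_i|, for each parameter of a toy model and ranks coordinates by it. The model, data, scoring rule, and top-k inspection are all assumptions made for illustration.

```python
# Hypothetical sketch of a first-order parameter-sensitivity score in the
# spirit of FI. The paper's FI is grounded in information geometry; here we
# use the common proxy |theta_i * dL/dtheta_i| purely for illustration.
import torch
import torch.nn as nn

def first_order_sensitivity(model: nn.Module, loss: torch.Tensor) -> dict:
    """Per-parameter score |theta * dL/dtheta| (assumed proxy, not the paper's FI)."""
    params = [p for p in model.parameters()]
    grads = torch.autograd.grad(loss, params)
    return {
        name: (param.detach() * grad).abs()
        for (name, param), grad in zip(model.named_parameters(), grads)
    }

# Toy usage on random data (all shapes and values are placeholders).
torch.manual_seed(0)
model = nn.Linear(8, 2)
x, y = torch.randn(4, 8), torch.randint(0, 2, (4,))
loss = nn.functional.cross_entropy(model(x), y)

scores = first_order_sensitivity(model, loss)
# Inspect the few highest-scoring coordinates, mirroring finding (I) that a
# small subset of dimensions dominates sensitivity.
flat = torch.cat([s.flatten() for s in scores.values()])
top_vals, _ = torch.topk(flat, k=5)
print("Top-5 sensitivity scores:", top_vals.tolist())
```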
Anthology ID:
2026.eacl-long.161
Volume:
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Màrquez
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
3509–3521
URL:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.161/
Cite (ACL):
Runpeng Dai, Run Yang, Fan Zhou, and Hongtu Zhu. 2026. Breach in the Shield: Unveiling the Vulnerabilities of Large Language Models. In Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3509–3521, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Breach in the Shield: Unveiling the Vulnerabilities of Large Language Models (Dai et al., EACL 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.161.pdf