Improving Large Language Model Safety with Contrastive Representation Learning

Samuel Simko, Mrinmaya Sachan, Bernhard Schölkopf, Zhijing Jin


Abstract
Large Language Models (LLMs) are powerful tools with profound societal impacts, yet their ability to generate responses to diverse and uncontrolled inputs leaves them vulnerable to adversarial attacks. While existing defenses often struggle to generalize across varying attack types, recent advancements in representation engineering offer promising alternatives. In this work, we propose a defense framework that formulates model defense as a contrastive representation learning (CRL) problem. Our method finetunes a model using a triplet-based loss combined with adversarial hard negative mining to encourage separation between benign and harmful representations. Our experimental results across multiple models demonstrate that our approach outperforms prior representation engineering-based defenses, improving robustness against both input-level and embedding-space attacks without compromising standard performance.
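To make the abstract's core idea concrete, below is a minimal illustrative sketch of a triplet-based contrastive objective with hard negative mining, written in plain Python. This is not the authors' implementation: the function names, the Euclidean distance choice, and the margin value are all assumptions; in the paper's setting the vectors would be internal model representations of benign (anchor/positive) and harmful (negative) inputs.

```python
# Illustrative sketch only (hypothetical helpers, not from the paper).
# Triplet loss: pull the anchor toward the positive (benign) representation
# and push it away from the negative (harmful) one by at least `margin`.

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss: max(0, d(a,p) - d(a,n) + margin)."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

def hardest_negative(anchor, negatives):
    """Hard negative mining: choose the harmful representation
    closest to the anchor, i.e. the one the loss penalizes most."""
    return min(negatives, key=lambda n: euclidean(anchor, n))
```

A negative already far from the anchor contributes zero loss, while a nearby ("hard") negative yields a positive loss that drives the representations apart; mining the hardest negative per anchor focuses training on exactly those cases.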
Anthology ID: 2025.emnlp-main.1430
Volume: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 28154–28182
URL: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1430/
Cite (ACL): Samuel Simko, Mrinmaya Sachan, Bernhard Schölkopf, and Zhijing Jin. 2025. Improving Large Language Model Safety with Contrastive Representation Learning. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 28154–28182, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Improving Large Language Model Safety with Contrastive Representation Learning (Simko et al., EMNLP 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1430.pdf
Checklist: 2025.emnlp-main.1430.checklist.pdf