SEA: Low-Resource Safety Alignment for Multimodal Large Language Models via Synthetic Embeddings

Weikai Lu, Hao Peng, Huiping Zhuang, Cen Chen, Ziqian Zeng


Abstract
Multimodal Large Language Models (MLLMs) have serious security vulnerabilities. Safety alignment on multimodal datasets that pair text with data from additional modalities can effectively improve MLLM security, but such datasets are costly to construct. Existing low-resource safety alignment methods, including purely textual alignment, have been found to struggle against the security risks posed by additional modalities. To address this, we propose Synthetic Embedding augmented safety Alignment (SEA), which optimizes embeddings of the additional modality through gradient updates, thereby expanding textual datasets into multimodal ones. This enables multimodal safety alignment training even when only textual data is available. Extensive experiments on image-, video-, and audio-based MLLMs demonstrate that SEA can synthesize a high-quality embedding on a single RTX 3090 GPU within 24 seconds, and that SEA significantly improves the security of MLLMs when they face threats from additional modalities. To assess the security risks introduced by video and audio, we also introduce a new benchmark called VA-SafetyBench. High attack success rates across multiple MLLMs confirm its difficulty. Our code and data will be available at https://github.com/ZeroNLP/SEA.
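The abstract describes SEA's mechanism only at a high level. Below is a minimal, assumption-laden PyTorch sketch of the core idea: optimizing a synthetic additional-modality embedding by gradient descent so that a frozen MLLM conditioned on it assigns high likelihood to a target text drawn from a text-only safety dataset. The interface names (mllm, tokenizer, num_tokens, embed_dim) and the simple language-modeling objective are illustrative placeholders, not the paper's actual implementation.

    # Minimal sketch of SEA's core idea. Assumptions: a HuggingFace-style
    # causal MLLM that accepts inputs_embeds, and a plain language-modeling
    # loss; the paper's actual objective and interfaces may differ.
    import torch

    def synthesize_embedding(mllm, tokenizer, target_text, num_tokens=32,
                             embed_dim=4096, steps=200, lr=1e-2, device="cuda"):
        # Learnable stand-in for the additional-modality encoder's output
        # (e.g., the visual tokens an image encoder would normally produce).
        z = torch.randn(1, num_tokens, embed_dim, device=device,
                        requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)

        target_ids = tokenizer(target_text,
                               return_tensors="pt").input_ids.to(device)
        with torch.no_grad():  # the MLLM stays frozen; only z is optimized
            target_embeds = mllm.get_input_embeddings()(target_ids)

        for _ in range(steps):
            opt.zero_grad()
            # Inject the synthetic embedding where real modality features
            # would normally be spliced into the text embedding sequence.
            inputs = torch.cat([z.to(target_embeds.dtype), target_embeds],
                               dim=1)
            # Mask the embedding positions (-100) so the loss is computed
            # only on the target text tokens.
            labels = torch.cat([torch.full((1, num_tokens), -100,
                                           device=device),
                                target_ids], dim=1)
            loss = mllm(inputs_embeds=inputs, labels=labels).loss
            loss.backward()
            opt.step()
        return z.detach()  # pair with the text sample for alignment training

Under this reading, the synthesized embeddings substitute for real images, videos, or audio, so multimodal safety alignment training can proceed from text-only data; per the abstract, one such embedding takes about 24 seconds on a single RTX 3090.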
Anthology ID: 2025.acl-long.1212
Volume: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 24894–24913
URL: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1212/
Cite (ACL): Weikai Lu, Hao Peng, Huiping Zhuang, Cen Chen, and Ziqian Zeng. 2025. SEA: Low-Resource Safety Alignment for Multimodal Large Language Models via Synthetic Embeddings. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 24894–24913, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): SEA: Low-Resource Safety Alignment for Multimodal Large Language Models via Synthetic Embeddings (Lu et al., ACL 2025)
PDF: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1212.pdf