Unlocking LLM Safeguards for Low-Resource Languages via Reasoning and Alignment with Minimal Training Data

Zhuowei Chen, Bowei Zhang, Nankai Lin, Tian Hou, Lianxi Wang


Abstract
Recent advances in LLMs have enhanced AI capabilities but also increased the risk posed by malicious requests, highlighting the need for effective LLM safeguards to detect such queries. Existing approaches largely rely on classifier-based methods that lack interpretability and perform poorly on low-resource languages. To address these limitations, we propose ConsistentGuard, a novel reasoning-based multilingual safeguard, which enhances explainability via reasoning and boosts knowledge transfer between languages through alignment. With only 1,000 training samples, our method demonstrates superior performance on three datasets across six languages, outperforming larger models trained with significantly more data, and exhibits strong interpretability and generalization. We also contribute a multilingual benchmark extension and release our code to support future research.
Anthology ID:
2025.mrl-main.7
Volume:
Proceedings of the 5th Workshop on Multilingual Representation Learning (MRL 2025)
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
David Ifeoluwa Adelani, Catherine Arnett, Duygu Ataman, Tyler A. Chang, Hila Gonen, Rahul Raja, Fabian Schmidt, David Stap, Jiayi Wang
Venues:
MRL | WS
Publisher:
Association for Computational Linguistics
Pages:
96–105
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.mrl-main.7/
Cite (ACL):
Zhuowei Chen, Bowei Zhang, Nankai Lin, Tian Hou, and Lianxi Wang. 2025. Unlocking LLM Safeguards for Low-Resource Languages via Reasoning and Alignment with Minimal Training Data. In Proceedings of the 5th Workshop on Multilingual Representation Learning (MRL 2025), pages 96–105, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Unlocking LLM Safeguards for Low-Resource Languages via Reasoning and Alignment with Minimal Training Data (Chen et al., MRL 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.mrl-main.7.pdf