Yan Solihin
2025
Evaluating the Robustness and Accuracy of Text Watermarking Under Real-World Cross-Lingual Manipulations
Mansour Al Ghanim | Jiaqi Xue | Rochana Prih Hastuti | Mengxin Zheng | Yan Solihin | Qian Lou
Findings of the Association for Computational Linguistics: EMNLP 2025
We present a study that benchmarks representative watermarking methods in cross-lingual settings. The current literature focuses mainly on evaluating watermarking methods for English, while evaluations in cross-lingual settings remain scarce. As a result, important adversarial scenarios available to a cross-lingual attacker are overlooked, leaving the practicality of cross-lingual watermarking unclear. In this paper, we evaluate four watermarking methods across four distinct, vocabulary-rich languages. Our experiments examine the quality of text under different watermarking procedures and the detectability of watermarks under practical translation attacks. Specifically, we investigate practical attack scenarios that an adversary with cross-lingual knowledge could mount, and we assess whether current watermarking methods hold up in such scenarios. Finally, we distill key insights from our findings about watermarking in cross-lingual settings.
2024
Jailbreaking LLMs with Arabic Transliteration and Arabizi
Mansour Al Ghanim | Saleh Almohaimeed | Mengxin Zheng | Yan Solihin | Qian Lou
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
This study identifies potential vulnerabilities of Large Language Models (LLMs) to ‘jailbreak’ attacks, focusing specifically on the Arabic language and its various forms. While most research has concentrated on English-based prompt manipulation, our investigation broadens the scope to Arabic. We initially tested the AdvBench benchmark in Standardized Arabic and found that even prompt manipulation techniques such as prefix injection were insufficient to provoke LLMs into generating unsafe content. However, when using Arabic transliteration and chatspeak (Arabizi), we found that unsafe content could be produced on platforms such as OpenAI GPT-4 and Anthropic Claude 3 Sonnet. Our findings suggest that using Arabic and its various forms could expose information that might otherwise remain hidden, potentially increasing the risk of jailbreak attacks. We hypothesize that this exposure stems from the model’s learned associations with specific words, highlighting the need for more comprehensive safety training across all language forms.
Co-authors
- Mansour Al Ghanim (2)
- Qian Lou (2)
- Mengxin Zheng (2)
- Saleh Almohaimeed (1)
- Rochana Prih Hastuti (1)