Large Language Models are Easily Confused: A Quantitative Metric, Security Implications and Typological Analysis

Yiyi Chen, Qiongxiu Li, Russa Biswas, Johannes Bjerva


Abstract
Language confusion is a phenomenon in which Large Language Models (LLMs) generate text that is neither in the desired language nor in a contextually appropriate language. It presents a critical challenge in text generation by LLMs, often appearing as erratic and unpredictable behavior. We hypothesize that there are linguistic regularities underlying this inherent vulnerability and shed light on patterns of language confusion across LLMs. We introduce a novel metric, Language Confusion Entropy, designed to directly measure and quantify this confusion based on language distributions informed by linguistic typology and lexical variation. Comprehensive comparisons with the Language Confusion Benchmark (Marchisio et al., 2024) confirm the effectiveness of our metric, revealing patterns of language confusion across LLMs. We further link language confusion to LLM security, and find similar patterns in the case of multilingual embedding inversion attacks. Our analysis demonstrates that linguistic typology offers a theoretically grounded interpretation and valuable insights into leveraging language similarities as a prior for LLM alignment and security.
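The metric's exact formulation is given in the paper itself, not on this page; as a rough, non-authoritative illustration of the underlying idea, the sketch below computes plain Shannon entropy over the distribution of languages detected in a model's outputs. The function name language_confusion_entropy and the hard-coded language sample are hypothetical, and the paper additionally informs the distributions with linguistic typology and lexical variation, which this toy version omits.

import math
from collections import Counter

def language_confusion_entropy(detected_langs):
    """Shannon entropy (in bits) of the language distribution over model outputs.

    0.0 means every output is in a single language (no confusion);
    higher values mean outputs are scattered across more languages.
    """
    if not detected_langs:
        return 0.0
    counts = Counter(detected_langs)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical sample: 8 of 10 responses are in the desired language (English),
# while two drift into German and Chinese.
langs = ["en"] * 8 + ["de", "zh"]
print(f"Language confusion entropy: {language_confusion_entropy(langs):.3f} bits")

Under this reading, an entropy of 0 bits means all outputs land in one language, while higher values reflect the erratic language switching the abstract describes.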
Anthology ID: 2025.findings-naacl.210
Volume: Findings of the Association for Computational Linguistics: NAACL 2025
Month: April
Year: 2025
Address: Albuquerque, New Mexico
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 3810–3827
URL: https://preview.aclanthology.org/landing_page/2025.findings-naacl.210/
Cite (ACL):
Yiyi Chen, Qiongxiu Li, Russa Biswas, and Johannes Bjerva. 2025. Large Language Models are Easily Confused: A Quantitative Metric, Security Implications and Typological Analysis. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 3810–3827, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Large Language Models are Easily Confused: A Quantitative Metric, Security Implications and Typological Analysis (Chen et al., Findings 2025)
PDF: https://preview.aclanthology.org/landing_page/2025.findings-naacl.210.pdf