Language-Specific Neurons Do Not Facilitate Cross-Lingual Transfer

Soumen Kumar Mondal, Sayambhu Sen, Abhishek Singhania, Preethi Jyothi


Abstract
Multilingual large language models (LLMs) aim for robust natural language understanding across diverse languages, yet their performance degrades significantly on low-resource languages. This work explores whether existing techniques for identifying language-specific neurons can be leveraged to enhance cross-lingual task performance on low-resource languages. We conduct detailed experiments covering existing language-specific neuron identification techniques (such as Language Activation Probability Entropy and activation probability-based thresholding) and neuron-specific LoRA fine-tuning with models like Llama 3.1 and Mistral Nemo. We find that such neuron-specific interventions are insufficient to yield cross-lingual improvements on downstream tasks (XNLI, XQuAD) in low-resource languages. This study highlights the challenges in achieving cross-lingual generalization and provides critical insights for multilingual LLMs.
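
The abstract refers to Language Activation Probability Entropy (LAPE) as one way of identifying language-specific neurons. As a rough illustration of that style of scoring, and not the paper's exact implementation, the sketch below computes a per-neuron entropy over per-language activation probabilities; the input matrix `act_prob` and the 1% selection threshold are illustrative assumptions.

```python
import numpy as np

def lape_scores(act_prob: np.ndarray) -> np.ndarray:
    """Entropy-based language-specificity score per neuron.

    act_prob: (num_neurons, num_languages) matrix where act_prob[i, j]
    is the measured probability that neuron i activates on text in
    language j. Lower entropy means the neuron fires predominantly for
    a few languages, i.e., it is more language-specific.
    """
    eps = 1e-12
    # Normalize each neuron's activation probabilities into a
    # distribution over languages.
    dist = act_prob / (act_prob.sum(axis=1, keepdims=True) + eps)
    # Shannon entropy over languages for each neuron.
    return -(dist * np.log(dist + eps)).sum(axis=1)

# Example usage (assumed setup): keep the neurons whose entropy falls in
# the lowest 1% as "language-specific" candidates.
# probs = ...  # (num_neurons, num_languages), estimated on held-out corpora
# scores = lape_scores(probs)
# selected = np.argsort(scores)[: int(0.01 * len(scores))]
```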
Anthology ID:
2025.insights-1.6
Volume:
The Sixth Workshop on Insights from Negative Results in NLP
Month:
May
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Aleksandr Drozd, João Sedoc, Shabnam Tafreshi, Arjun Akula, Raphael Shu
Venues:
insights | WS
Publisher:
Association for Computational Linguistics
Pages:
46–62
URL:
https://preview.aclanthology.org/landing_page/2025.insights-1.6/
Cite (ACL):
Soumen Kumar Mondal, Sayambhu Sen, Abhishek Singhania, and Preethi Jyothi. 2025. Language-Specific Neurons Do Not Facilitate Cross-Lingual Transfer. In The Sixth Workshop on Insights from Negative Results in NLP, pages 46–62, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Language-Specific Neurons Do Not Facilitate Cross-Lingual Transfer (Mondal et al., insights 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.insights-1.6.pdf