Improving Cross Lingual Transfer by Pretraining with Active Forgetting

Divyanshu Aggarwal, Ashutosh Sathe, Sunayana Sitaram


Abstract
Large Language Models (LLMs) demonstrate exceptional capabilities on a multitude of NLP tasks. However, the efficacy of such models on languages other than English is often limited. Prior work has shown that encoder-only models such as BERT or XLM-RoBERTa achieve impressive cross-lingual transfer of their capabilities from English to other languages. In this work, we propose a pretraining strategy that uses active forgetting to achieve similar cross-lingual transfer in decoder-only LLMs. We show that LLMs pretrained with active forgetting are highly effective when adapting to new and unseen languages. Through extensive experimentation, we find that LLMs pretrained with active forgetting learn better multilingual representations, which translates into better performance on many downstream tasks.
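For context on the technique named in the abstract, the sketch below illustrates active forgetting as it is commonly formulated: the token embedding layer is periodically re-initialized during pretraining while the transformer body keeps training, which encourages the body to learn representations that do not depend on any particular surface vocabulary. This is an illustrative assumption-based sketch, not the authors' implementation; the helper names, the reset_interval value, and the HuggingFace-style get_input_embeddings() accessor are assumptions.

import torch
import torch.nn as nn

def reset_embeddings(model: nn.Module, std: float = 0.02) -> None:
    # Re-initialize the input (token) embedding matrix.
    # get_input_embeddings() follows the HuggingFace PreTrainedModel API;
    # for a plain nn.Module, replace this with direct access to the embedding layer.
    emb = model.get_input_embeddings()
    nn.init.normal_(emb.weight, mean=0.0, std=std)

def pretrain_with_active_forgetting(model, optimizer, data_loader,
                                    total_steps: int, reset_interval: int = 1000):
    # Standard causal-LM pretraining loop with one addition: every
    # reset_interval optimizer steps the embeddings are wiped and relearned.
    model.train()
    step = 0
    for batch in data_loader:
        if step >= total_steps:
            break
        loss = model(**batch).loss  # assumes the batch carries labels for the LM loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        step += 1
        if step % reset_interval == 0 and step < total_steps:
            # Active forgetting: the transformer body keeps its weights,
            # only the embedding table is re-initialized.
            reset_embeddings(model)
    return model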
Anthology ID:
2025.emnlp-main.120
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2367–2378
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.120/
Cite (ACL):
Divyanshu Aggarwal, Ashutosh Sathe, and Sunayana Sitaram. 2025. Improving Cross Lingual Transfer by Pretraining with Active Forgetting. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 2367–2378, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Improving Cross Lingual Transfer by Pretraining with Active Forgetting (Aggarwal et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.120.pdf
Checklist:
 2025.emnlp-main.120.checklist.pdf