@inproceedings{arya-2025-monolingual,
    title = "Monolingual Adapter Networks for Efficient Cross-Lingual Alignment",
    author = "Arya, Pulkit",
    editor = "Adelani, David Ifeoluwa  and
      Arnett, Catherine  and
      Ataman, Duygu  and
      Chang, Tyler A.  and
      Gonen, Hila  and
      Raja, Rahul  and
      Schmidt, Fabian  and
      Stap, David  and
      Wang, Jiayi",
    booktitle = "Proceedings of the 5th Workshop on Multilingual Representation Learning (MRL 2025)",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.mrl-main.24/",
    pages = "360--368",
    ISBN = "979-8-89176-345-6",
    abstract = "Multilingual alignment for low-resource languages is a challenge for embedding models. The scarcity of parallel datasets, in addition to rich morphological diversity in languages, adds to the complexity of training multilingual embedding models. To aid in the development of multilingual models for under-represented languages such as Sanskrit, we introduce GitaDB: a collection of 640 Sanskrit verses translated into 5 Indic languages and English. We benchmarked various state-of-the-art embedding models on our dataset in different bilingual and cross-lingual semantic retrieval tasks of increasing complexity and found a steep degradation in retrieval scores. We found a wide margin in the retrieval performance between English and Sanskrit targets. To bridge this gap, we introduce Monolingual Adapter Networks: a parameter-efficient method to bolster cross-lingual alignment of embedding models without the need for parallel corpora or full finetuning."
}