Rahul Kejriwal


2024

CharSpan: Utilizing Lexical Similarity to Enable Zero-Shot Machine Translation for Extremely Low-resource Languages
Kaushal Maurya | Rahul Kejriwal | Maunendra Desarkar | Anoop Kunchukuttan
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers)

We address the task of machine translation (MT) from extremely low-resource languages (ELRLs) to English by leveraging cross-lingual transfer from *closely-related* high-resource languages (HRLs). Developing an MT system for an ELRL is challenging because these languages typically lack parallel and monolingual corpora, and their representations are absent from large multilingual language models. Many ELRLs share lexical similarities with some HRLs, which presents a novel modeling opportunity. However, existing subword-based neural MT models do not explicitly harness this lexical similarity, as they only implicitly align the HRL and ELRL latent embedding spaces. To overcome this limitation, we propose CharSpan, a novel approach based on character-span noise augmentation of the HRL training data. This serves as a regularization technique, making the model more robust to lexical divergence between the HRL and ELRL, thus facilitating effective cross-lingual transfer. Our method significantly outperforms strong baselines in zero-shot settings on closely related HRL-ELRL pairs from three diverse language families, emerging as the state-of-the-art model for ELRLs.
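The core idea of the abstract above can be sketched in a few lines: corrupt random character spans in the HRL source text so the model cannot rely on exact surface forms. The function below is a minimal illustrative sketch, not the paper's exact recipe; the function name and the `max_span` and `noise_prob` parameters are assumptions for illustration.

```python
import random

def charspan_noise(sentence, max_span=3, noise_prob=0.1, seed=None):
    """Illustrative character-span noise augmentation: with probability
    noise_prob at each non-space position, replace a short character span
    with random characters drawn from the sentence's own alphabet.
    Parameters and procedure are a sketch, not the paper's exact method."""
    rng = random.Random(seed)
    # Build a replacement alphabet from the sentence itself (no whitespace).
    alphabet = [c for c in set(sentence) if not c.isspace()]
    chars = list(sentence)
    out = []
    i = 0
    while i < len(chars):
        if chars[i].isspace() or rng.random() >= noise_prob:
            out.append(chars[i])  # keep this character unchanged
            i += 1
        else:
            span = rng.randint(1, max_span)
            # Replace the next `span` characters with random ones.
            out.extend(rng.choice(alphabet) for _ in range(span))
            i += span
    return "".join(out)
```

Applied to the HRL side of the training data, such noise makes the learned representations less sensitive to the small spelling divergences that separate lexically similar HRL-ELRL word pairs.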

2021

A Large-scale Evaluation of Neural Machine Transliteration for Indic Languages
Anoop Kunchukuttan | Siddharth Jain | Rahul Kejriwal
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

We take up the task of large-scale evaluation of neural machine transliteration between English and Indic languages, with a focus on multilingual transliteration to exploit the orthographic similarity between Indian languages. We create a corpus of 600K word pairs mined from parallel translation corpora and monolingual corpora, the largest transliteration corpus for Indian languages mined from public sources. We perform a detailed analysis of multilingual transliteration and propose an improved multilingual training recipe for Indic languages. We analyze various factors affecting transliteration quality, such as language family, transliteration direction, and word origin.

2020

Contact Relatedness can help improve multilingual NMT: Microsoft STCI-MT @ WMT20
Vikrant Goyal | Anoop Kunchukuttan | Rahul Kejriwal | Siddharth Jain | Amit Bhagwat
Proceedings of the Fifth Conference on Machine Translation

We describe our submission for the English→Tamil and Tamil→English news translation shared task. In this submission, we explore whether a low-resource language (Tamil) can benefit from a high-resource language (Hindi) with which it shares contact relatedness. We show that utilizing contact relatedness via multilingual NMT can significantly improve translation quality for English-Tamil translation.

2018

Towards Predicting Age of Acquisition of Words Using a Dictionary Network
Ditty Mathew | Girish Raguvir Jeyakumar | Rahul Kejriwal | Sutanu Chakraborti
Proceedings of the 15th International Conference on Natural Language Processing