Pirashanth Ratnamogan


2022

Robust Domain Adaptation for Pre-trained Multilingual Neural Machine Translation Models
Mathieu Grosso | Alexis Mathey | Pirashanth Ratnamogan | William Vanhuffel | Michael Fotso
Proceedings of the Massively Multilingual Natural Language Understanding Workshop (MMNLU-22)

Recent literature has demonstrated the potential of multilingual Neural Machine Translation (mNMT) models. However, even the most effective models are not well suited to specialized industries, where internal data is scarce and expensive to obtain across all language pairs, making it hard to fine-tune an mNMT model on a specialized domain. In this context, we focus on a new task: domain adaptation of a pre-trained mNMT model on a single language pair while maintaining model quality on generic-domain data for all language pairs. The risk of degradation on the generic domain and on the other language pairs is high. This task is key for industrial adoption of mNMT models and borders many other tasks. We propose a fine-tuning procedure for a generic mNMT model that combines embedding freezing and an adversarial loss. Our experiments demonstrate that this procedure improves performance on specialized data with minimal loss of initial performance on the generic domain across all language pairs, compared to a naive standard approach (+10.0 BLEU on specialized data; -0.01 to -0.5 BLEU on the WMT and Tatoeba datasets for the other pairs with M2M100).
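A minimal sketch of the two ingredients the abstract names, assuming a PyTorch/Hugging Face setup with M2M100: the shared token embeddings are frozen, and a small domain discriminator behind a gradient-reversal layer supplies the adversarial term. The discriminator head, the mean pooling, and the weighting factor lambda_adv are illustrative assumptions; the abstract does not specify the exact adversarial formulation.

```python
# Illustrative sketch, not the paper's exact method: fine-tuning M2M100 with
# frozen embeddings plus an adversarial term. The gradient-reversal domain
# discriminator below is an assumed instantiation of the "adversarial loss".
import torch
import torch.nn as nn
from transformers import M2M100ForConditionalGeneration

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; negates gradients in the backward pass,
    # so the encoder is trained to *confuse* the domain discriminator.
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

# Ingredient 1: freeze the shared token embeddings so the generic
# multilingual representations survive in-domain fine-tuning.
for p in model.get_input_embeddings().parameters():
    p.requires_grad = False

# Ingredient 2: a small domain discriminator (hypothetical head) over
# mean-pooled encoder states; 2 classes: generic vs. specialized domain.
discriminator = nn.Linear(model.config.d_model, 2)

def training_loss(batch, domain_labels, lambda_adv=0.1):
    out = model(input_ids=batch["input_ids"],
                attention_mask=batch["attention_mask"],
                labels=batch["labels"])
    pooled = out.encoder_last_hidden_state.mean(dim=1)
    adv_logits = discriminator(GradReverse.apply(pooled))
    adv_loss = nn.functional.cross_entropy(adv_logits, domain_labels)
    # Translation loss on in-domain data + adversarial regularizer that
    # discourages the encoder from drifting toward domain-specific features.
    return out.loss + lambda_adv * adv_loss
```

Gradient reversal is one standard way to realize an adversarial objective without a separate generator/discriminator training loop; mixing generic and specialized batches with the corresponding domain_labels would then penalize encoder representations that become domain-identifiable.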

2020

ACNLP at SemEval-2020 Task 6: A Supervised Approach for Definition Extraction
Fabien Caspani | Pirashanth Ratnamogan | Mathis Linger | Mhamed Hajaiej
Proceedings of the Fourteenth Workshop on Semantic Evaluation

We describe our contribution to two of the subtasks of SemEval 2020 Task 6, DeftEval: extracting term-definition pairs in free text. Our system for Subtask 1 (Sentence Classification) is based on a transformer architecture: we use transfer learning to fine-tune a pretrained model on the downstream task. Our system for Subtask 3 (Relation Classification) uses a Random Forest classifier with dedicated handcrafted features. The two systems achieve F1-scores of 0.830 and 0.994, respectively, on the official test set, and we believe the insights from our study can help advance research on definition extraction.
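As an illustration of the Subtask 3 setup, here is a minimal scikit-learn sketch of a Random Forest over handcrafted features for candidate term-definition pairs. The specific features and the toy data are assumptions made for illustration; the abstract does not enumerate the actual feature set.

```python
# Illustrative sketch of the Subtask 3 approach: a Random Forest over
# handcrafted features for candidate (term, definition) pairs. The features
# below are hypothetical examples, not the paper's actual feature set.
from sklearn.ensemble import RandomForestClassifier

def handcrafted_features(term, definition, sentence):
    return [
        len(term.split()),                        # term length in tokens
        len(definition.split()),                  # definition length in tokens
        abs(sentence.find(term) - sentence.find(definition)),  # span distance (chars)
        int(" is " in sentence or " means " in sentence),      # definitional cue word
    ]

# Toy candidate pairs with binary relation labels (1 = definition defines term).
candidates = [
    ("neural network", "a model built from layered units",
     "A neural network is a model built from layered units."),
    ("gradient", "sunny with light wind",
     "The gradient was computed; the forecast is sunny with light wind."),
]
labels = [1, 0]

X = [handcrafted_features(t, d, s) for t, d, s in candidates]
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X, labels)
print(clf.predict(X))  # sanity check on the training examples
```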