Maxime Bouthors


2024

Retrieving Examples from Memory for Retrieval Augmented Neural Machine Translation: A Systematic Comparison
Maxime Bouthors | Josep Crego | François Yvon
Findings of the Association for Computational Linguistics: NAACL 2024

Retrieval-Augmented Neural Machine Translation (RAMT) architectures retrieve examples from memory to guide the generation process. While most work in this area explores new ways to exploit the retrieved examples, the upstream retrieval step remains largely unexplored. In this paper, we study the effect of varying retrieval methods across several translation architectures to better understand the interplay between these two processes. We conduct experiments on two language pairs in a multi-domain setting and consider several downstream architectures based on a standard autoregressive model, an edit-based model, and a large language model with in-context learning. Our experiments show that the choice of retrieval technique impacts translation scores, with variance across architectures. We also discuss the effects of increasing the number and diversity of examples, which are mostly positive across the board.
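The retrieval techniques compared in the paper are not detailed in this abstract; as a minimal, hypothetical sketch of the upstream step, the following ranks translation-memory entries by character-level similarity to the source sentence (using `difflib` as a stand-in for the paper's actual retrieval methods; the function name and data are illustrative).

```python
from difflib import SequenceMatcher

def retrieve_fuzzy_matches(source, memory, k=2):
    """Rank translation-memory entries by surface similarity to the source.

    `memory` is a list of (src, tgt) pairs; returns the k most similar pairs.
    """
    scored = [
        (SequenceMatcher(None, source, src).ratio(), src, tgt)
        for src, tgt in memory
    ]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [(src, tgt) for _, src, tgt in scored[:k]]

# Toy translation memory: (source, target) pairs.
memory = [
    ("the cat sleeps", "le chat dort"),
    ("the dog barks", "le chien aboie"),
    ("a cat sleeps on the mat", "un chat dort sur le tapis"),
]
matches = retrieve_fuzzy_matches("the cat sleeps on the mat", memory, k=2)
```

The retrieved pairs would then be passed to the downstream architecture (autoregressive decoder, edit-based model, or LLM prompt) to condition generation.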

Optimiser le choix des exemples pour la traduction automatique augmentée par des mémoires de traduction
Maxime Bouthors | Josep Crego | François Yvon
Actes de la 31ème Conférence sur le Traitement Automatique des Langues Naturelles, volume 1 : articles longs et prises de position

Example-based neural translation relies on a translation memory containing examples similar to the sentences to be translated. These examples are used to condition the predictions of a neural decoder. We focus on improving the system that performs the retrieval of similar sentences, with the architecture of the neural decoder held fixed and based here on an explicit edit model, the multi-Levenshtein Transformer. The problem considered consists in finding an optimal set of similar examples, i.e., one that maximally covers the source sentence. Building on the theory of submodular functions, we explore new algorithms to optimize this coverage and evaluate the translation performance improvements to which they lead.
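The abstract frames example selection as maximizing coverage of the source sentence, a monotone submodular objective. As an illustrative sketch only (the paper's actual algorithms and coverage measure are not given here), a greedy selection over token overlap shows the general idea; the classic result is that greedy maximization of a monotone submodular function under a cardinality constraint achieves a (1 - 1/e) approximation.

```python
def greedy_cover(source_tokens, candidates, k=2):
    """Greedily pick up to k examples maximizing token coverage of the source.

    Coverage (number of distinct source tokens appearing in the selected
    examples) is monotone submodular, so greedy selection is provably
    within (1 - 1/e) of the optimal set.
    """
    uncovered = set(source_tokens)
    selected = []
    for _ in range(k):
        # Pick the candidate covering the most still-uncovered tokens.
        best = max(candidates, key=lambda ex: len(uncovered & set(ex)))
        if not (uncovered & set(best)):
            break  # no candidate adds coverage; stop early
        selected.append(best)
        uncovered -= set(best)
    return selected

source = "the black cat sleeps".split()
candidates = [["the", "cat"], ["black", "dog"], ["the", "black"]]
selected = greedy_cover(source, candidates, k=2)
```

Here the greedy choice first takes the example covering two source tokens, then the one adding a new token, rather than a second example whose tokens are already covered.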

2023

Towards Example-Based NMT with Multi-Levenshtein Transformers
Maxime Bouthors | Josep Crego | François Yvon
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Retrieval-Augmented Machine Translation (RAMT) is attracting growing attention. This is because RAMT not only improves translation metrics, but is also assumed to implement some form of domain adaptation. In this contribution, we study another salient trait of RAMT: its ability to make translation decisions more transparent by allowing users to go back to the examples that contributed to these decisions. For this, we propose a novel architecture aiming to increase this transparency. This model adapts a retrieval-augmented version of the Levenshtein Transformer and makes it amenable to simultaneously editing multiple fuzzy matches found in memory. We discuss how to perform training and inference in this model, based on multi-way alignment algorithms and imitation learning. Our experiments show that editing several examples positively impacts translation scores, notably increasing the number of target spans that are copied from existing instances.
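The model edits fuzzy matches into the target via Levenshtein-style operations learned by imitation. As a loose, hypothetical illustration (not the paper's multi-way alignment or training procedure), the sketch below derives token-level oracle edits that turn a single retrieved match into the reference translation, using `difflib` opcodes.

```python
from difflib import SequenceMatcher

def oracle_edits(match_tokens, target_tokens):
    """Derive token-level edit operations (keep/delete/insert) that turn a
    retrieved fuzzy match into the reference translation."""
    ops = []
    sm = SequenceMatcher(None, match_tokens, target_tokens)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            ops.extend(("keep", tok) for tok in match_tokens[i1:i2])
        elif tag == "delete":
            ops.extend(("delete", tok) for tok in match_tokens[i1:i2])
        elif tag == "insert":
            ops.extend(("insert", tok) for tok in target_tokens[j1:j2])
        else:  # replace = delete old tokens, then insert new ones
            ops.extend(("delete", tok) for tok in match_tokens[i1:i2])
            ops.extend(("insert", tok) for tok in target_tokens[j1:j2])
    return ops

match = "le chat dort".split()
target = "le chien dort".split()
edits = oracle_edits(match, target)
```

Spans tagged "keep" correspond to target material copied verbatim from the memory example, the behavior the abstract reports being increased by editing several examples at once.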