2024
Exploring NMT Explainability for Translators Using NMT Visualising Tools
Gabriela Gonzalez-Saez | Mariam Nakhle | James Turner | Fabien Lopez | Nicolas Ballier | Marco Dinarelli | Emmanuelle Esperança-Rodier | Sui He | Raheel Qader | Caroline Rossi | Didier Schwab | Jun Yang
Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 1)
This paper describes work in progress on visualisation tools to foster collaboration between translators and computational scientists. We describe how visualisation features can be used to explain translation and NMT outputs. We tested several visualisation functionalities with three NMT models for the Chinese-English, Spanish-English and French-English language pairs. We created three demos containing different visualisation tools and analysed them within the framework of performance-explainability, focusing on the translator’s perspective.
The MAKE-NMTViz Project: Meaningful, Accurate and Knowledge-limited Explanations of NMT Systems for Translators
Gabriela Gonzalez-Saez | Fabien Lopez | Mariam Nakhle | James Turner | Nicolas Ballier | Marco Dinarelli | Emmanuelle Esperança-Rodier | Sui He | Caroline Rossi | Didier Schwab | Jun Yang
Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 2)
This paper describes MAKE-NMTViz, a project designed to help translators visualize neural machine translation outputs using explainable artificial intelligence visualization tools initially developed for computer vision.
Literacy in Digital Environments and Resources (LT-LiDER)
Joss Moorkens | Pilar Sánchez-Gijón | Esther Simon | Mireia Urpí | Nora Aranberri | Dragoș Ciobanu | Ana Guerberof-Arenas | Janiça Hackenbuchner | Dorothy Kenny | Ralph Krüger | Miguel Rios | Isabel Ginel | Caroline Rossi | Alina Secară | Antonio Toral
Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 2)
LT-LiDER is an Erasmus+ cooperation project with two main aims. The first is to map the landscape of technological capabilities required to work as a language and/or translation expert in the digitalised and datafied language industry. The second is to generate training outputs that will help language and translation trainers improve their skills and adopt appropriate pedagogical approaches and strategies for integrating data-driven technology into their language or translation classrooms, with a focus on digital and AI literacy.
2023
The MAKE-NMTVIZ System Description for the WMT23 Literary Task
Fabien Lopez | Gabriela González | Damien Hansen | Mariam Nakhle | Behnoosh Namdarzadeh | Nicolas Ballier | Marco Dinarelli | Emmanuelle Esperança-Rodier | Sui He | Sadaf Mohseni | Caroline Rossi | Didier Schwab | Jun Yang | Jean-Baptiste Yunès | Lichao Zhu
Proceedings of the Eighth Conference on Machine Translation
This paper describes the MAKE-NMTVIZ systems trained for the WMT 2023 Literary task. For our primary submission, we used the train, valid1, and test1 portions of the GuoFeng corpus (Wang et al., 2023) to fine-tune the mBART50 model on Chinese-English data. We followed training parameters very similar to those of Lee et al. (2022) when fine-tuning mBART50. We trained for 3 epochs, using GELU as the activation function, with a learning rate of 0.05, a dropout of 0.1 and a batch size of 16. We decoded using a beam search of size 5. For our contrastive1 submission, we implemented a fine-tuned concatenation transformer (Lupo et al., 2023). The training proceeded in two steps: (i) a sentence-level transformer was trained for 10 epochs using general, test1, and valid1 data (more details in the contrastive2 system); (ii) we then fine-tuned at document level with 3-sentence concatenation for 4 epochs using train, test2, and valid2 data. During the fine-tuning, we used ReLU as the activation function, with an inverse square root learning rate schedule, a dropout of 0.1, and a batch size of 64. We decoded using beam search. For our contrastive2 and last submission, we implemented a sentence-level transformer model (Vaswani et al., 2017). The model was trained for 10 epochs using general-purpose, test1, and valid1 data. The training parameters were an inverse square root learning rate schedule, a dropout of 0.1, and a batch size of 64. We decoded using a beam search of size 4. We then compared the three translation outputs from an interdisciplinary perspective, investigating some of the effects of sentence- vs document-based training. Computer scientists, translators and corpus linguists discussed the remaining linguistic issues for this discourse-level literary translation.
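The primary submission's recipe (mBART50 fine-tuning for 3 epochs, GELU activation, learning rate 0.05, dropout 0.1, batch size 16, beam size 5 at decoding) maps onto a standard sequence-to-sequence setup. The sketch below illustrates that configuration with the Hugging Face transformers API; the abstract does not name its toolkit, so the checkpoint name, language codes, output path, and the API itself are assumptions here, and only the quoted hyperparameters come from the abstract.

# Minimal sketch of the primary-submission configuration described above.
# Assumptions: Hugging Face transformers as the toolkit, the public
# mBART-50 checkpoint, and zh_CN -> en_XX language codes; only the
# hyperparameters (3 epochs, GELU, lr 0.05, dropout 0.1, batch size 16,
# beam size 5) are taken from the abstract.
from transformers import (
    MBart50TokenizerFast,
    MBartForConditionalGeneration,
    Seq2SeqTrainingArguments,
)

checkpoint = "facebook/mbart-large-50-many-to-many-mmt"  # assumed checkpoint
tokenizer = MBart50TokenizerFast.from_pretrained(
    checkpoint, src_lang="zh_CN", tgt_lang="en_XX"
)
model = MBartForConditionalGeneration.from_pretrained(checkpoint)

# Hyperparameters reported in the abstract (mBART's defaults already use
# GELU and dropout 0.1; set explicitly here to mirror the description).
model.config.activation_function = "gelu"
model.config.dropout = 0.1

training_args = Seq2SeqTrainingArguments(
    output_dir="mbart50-guofeng-zh-en",  # assumed output path
    num_train_epochs=3,
    learning_rate=0.05,
    per_device_train_batch_size=16,
    predict_with_generate=True,
    generation_num_beams=5,  # beam search of size 5 at decoding
)

# Decoding a single sentence with the same beam size:
batch = tokenizer("示例句子", return_tensors="pt")
output_ids = model.generate(
    **batch,
    forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],
    num_beams=5,
)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))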
2022
MultitraiNMT Erasmus+ project: Machine Translation Training for multilingual citizens (multitrainmt.eu)
Mikel L. Forcada | Pilar Sánchez-Gijón | Dorothy Kenny | Felipe Sánchez-Martínez | Juan Antonio Pérez Ortiz | Riccardo Superbo | Gema Ramírez Sánchez | Olga Torres-Hostench | Caroline Rossi
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation
The MultitraiNMT Erasmus+ project has developed an open, innovative syllabus in machine translation, focusing on neural machine translation (NMT) and targeting both language learners and translators. The training materials include an open-access coursebook with more than 250 activities and a pedagogical NMT interface called MutNMT that allows users to learn how neural machine translation works. These materials will allow students to develop the technical and ethical skills and competences required to become informed, critical users of machine translation in their own language learning and translation practice. The project started in July 2019 and will end in July 2022.
2021
MultiTraiNMT: Training Materials to Approach Neural Machine Translation from Scratch
Gema Ramírez-Sánchez | Juan Antonio Pérez-Ortiz | Felipe Sánchez-Martínez | Caroline Rossi | Dorothy Kenny | Riccardo Superbo | Pilar Sánchez-Gijón | Olga Torres-Hostench
Proceedings of the Translation and Interpreting Technology Online Conference
The MultiTraiNMT Erasmus+ project aims to develop an open, innovative syllabus in neural machine translation (NMT) for language learners and translators as multilingual citizens. Machine translation is seen as a resource that can support citizens in their attempt to acquire and develop language skills, provided they are trained to use it in an informed and critical way. Machine translation could thus help tackle the mismatch between the EU’s desired aim of having multilingual citizens who speak at least two foreign languages and the current situation, in which citizens generally fall far short of this objective. The training materials consist of an open-access coursebook, an open-source NMT web application called MutNMT for training purposes, and corresponding activities.
2019
With or without post-editing processes? Evidence for a gap in machine translation evaluation
Caroline Rossi | Emmanuelle Esperança-Rodier
Proceedings of the Second MEMENTO workshop on Modelling Parameters of Cognitive Effort in Translation Production