Lydia Nishimwe
2022
Inria-ALMAnaCH at WMT 2022: Does Transcription Help Cross-Script Machine Translation?
Jesujoba Alabi | Lydia Nishimwe | Benjamin Muller | Camille Rey | Benoît Sagot | Rachel Bawden
Proceedings of the Seventh Conference on Machine Translation (WMT)
This paper describes the Inria ALMAnaCH team submission to the WMT 2022 general translation shared task. Participating in the language directions cs,ru,uk→en and cs↔uk, we experiment with the use of a dedicated Latin-script transcription convention aimed at representing all Slavic languages involved in a way that maximises character- and word-level correspondences between them as well as with the English language. Our hypothesis was that bringing the source and target languages closer could have a positive impact on machine translation results. We provide multiple comparisons, including bilingual and multilingual baselines, with and without transcription. Initial results indicate that the transcription strategy was not successful, yielding lower scores than the baselines. We nevertheless submitted our multilingual, transcribed models as our primary systems, and in this paper we offer some explanations for these negative results.
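The core idea of the abstract above can be illustrated with a toy sketch: map Cyrillic source text into a Latin-script form so that source and target share more character-level correspondences. The mapping below is a small hypothetical sample for illustration only, not the paper's actual transcription convention.

```python
# Hypothetical, minimal Cyrillic-to-Latin character mapping (illustrative only;
# the paper's convention is designed to maximise correspondences across all
# Slavic languages involved and with English, which this sample does not do).
CYRILLIC_TO_LATIN = {
    "п": "p", "р": "r", "и": "i", "в": "v", "е": "e", "т": "t",
    "м": "m", "о": "o", "с": "s", "к": "k", "а": "a",
}

def transcribe(text: str) -> str:
    """Transliterate known Cyrillic characters; leave others unchanged."""
    return "".join(CYRILLIC_TO_LATIN.get(ch, ch) for ch in text.lower())

print(transcribe("привет"))  # → "privet"
```

In a real pipeline, such a transcription step would be applied to the training and test data before tokenisation, so the MT model sees source and target text in a shared script.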
The MRL 2022 Shared Task on Multilingual Clause-level Morphology
Omer Goldman | Francesco Tinner | Hila Gonen | Benjamin Muller | Victoria Basmov | Shadrack Kirimi | Lydia Nishimwe | Benoît Sagot | Djamé Seddah | Reut Tsarfaty | Duygu Ataman
Proceedings of the 2nd Workshop on Multi-lingual Representation Learning (MRL)
The 2022 Multilingual Representation Learning (MRL) Shared Task was dedicated to clause-level morphology. As the first ever benchmark that defines and evaluates morphology outside its traditional lexical boundaries, the shared task on multilingual clause-level morphology sets the scene for competition across different approaches to morphological modeling, with 3 clause-level sub-tasks: morphological inflection, reinflection and analysis, where systems are required to generate, manipulate or analyze simple sentences centered around a single content lexeme and a set of morphological features characterizing its syntactic clause. This year’s tasks covered eight typologically distinct languages: English, French, German, Hebrew, Russian, Spanish, Swahili and Turkish. The task received submissions of four systems from three teams, which were compared to two baselines implementing prominent multilingual learning methods. The results show that modern NLP models are effective in solving morphological tasks even at the clause level. However, there is still room for improvement, especially in the task of morphological analysis.