2023
Using Whisper LLM for Automatic Phonetic Diagnosis of L2 Speech, a Case Study with French Learners of English
Nicolas Ballier | Adrien Meli | Maelle Amand | Jean-Baptiste Yunès
Proceedings of the 6th International Conference on Natural Language and Speech Processing (ICNLSP 2023)
Investigating Techniques for a Deeper Understanding of Neural Machine Translation (NMT) Systems through Data Filtering and Fine-tuning Strategies
Lichao Zhu | Maria Zimina | Behnoosh Namdar | Nicolas Ballier | Guillaume Wisniewski | Jean-Baptiste Yunès
Proceedings of the Eighth Conference on Machine Translation
In the context of this biomedical shared task, we have implemented data filters to enhance the selection of relevant training data for fine-tuning from the available training data sources. Specifically, we have employed textometric analysis to detect repetitive segments within the test set, which we have then used to refine the training data used to fine-tune the mBart-50 baseline model. Through this approach, we aim to achieve several objectives: developing a practical fine-tuning strategy for training biomedical in-domain fr<>en models, defining criteria for filtering in-domain training data, and comparing model predictions and fine-tuning data in accordance with the test set to gain a deeper insight into the functioning of Neural Machine Translation (NMT) systems.
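The abstract does not spell out how the repetitive segments drive the filtering step; the following Python sketch is only a minimal illustration, with hypothetical function names and thresholds, of keeping training pairs whose source side shares a recurring n-gram with the test set, in the spirit of the textometric filtering described above.

from collections import Counter

def repeated_ngrams(sentences, n=4, min_count=3):
    # Collect word n-grams that recur across the test set (n and threshold are hypothetical).
    counts = Counter()
    for sent in sentences:
        tokens = sent.split()
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    return {ng for ng, c in counts.items() if c >= min_count}

def filter_training_pairs(pairs, test_sentences, n=4, min_count=3):
    # Keep (source, target) pairs whose source side contains a repetitive test-set segment.
    segments = repeated_ngrams(test_sentences, n, min_count)
    kept = []
    for src, tgt in pairs:
        tokens = src.split()
        ngrams = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
        if ngrams & segments:
            kept.append((src, tgt))
    return kept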
The MAKE-NMTVIZ System Description for the WMT23 Literary Task
Fabien Lopez | Gabriela González | Damien Hansen | Mariam Nakhle | Behnoosh Namdarzadeh | Nicolas Ballier | Marco Dinarelli | Emmanuelle Esperança-Rodier | Sui He | Sadaf Mohseni | Caroline Rossi | Didier Schwab | Jun Yang | Jean-Baptiste Yunès | Lichao Zhu
Proceedings of the Eighth Conference on Machine Translation
This paper describes the MAKE-NMTVIZ systems trained for the WMT 2023 Literary task. For our primary submission, we used the train, valid1, and test1 splits of the GuoFeng corpus (Wang et al., 2023) to fine-tune the mBART50 model on Chinese-English data. We followed training parameters very similar to those of Lee et al. (2022) when fine-tuning mBART50: we trained for 3 epochs, using gelu as activation function, with a learning rate of 0.05, a dropout of 0.1 and a batch size of 16, and decoded using a beam search of size 5. For our contrastive1 submission, we implemented a fine-tuned concatenation transformer (Lupo et al., 2023). The training proceeded in two steps: (i) a sentence-level transformer was trained for 10 epochs using general, test1, and valid1 data (more details in the contrastive2 system); (ii) we then fine-tuned at document level with 3-sentence concatenation for 4 epochs using train, test2, and valid2 data. During the fine-tuning, we used ReLU as activation function, an inverse square root learning rate schedule, a dropout of 0.1, and a batch size of 64, and decoded using beam search. For our contrastive2 and last submission, we implemented a sentence-level transformer model (Vaswani et al., 2017). The model was trained with general data for 10 epochs using general-purpose, test1, and valid1 data, with an inverse square root learning rate schedule, a dropout of 0.1, and a batch size of 64. We decoded using a beam search of size 4. We then compared the three translation outputs from an interdisciplinary perspective, investigating some of the effects of sentence- vs. document-based training. Computer scientists, translators and corpus linguists discussed the remaining linguistic issues for this discourse-level literary translation.
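As a rough illustration of how the hyperparameters reported for the primary submission could be set up, the sketch below maps them onto the Hugging Face transformers interface to mBART-50; the checkpoint name, language codes and example sentence are assumptions, and the submission's actual toolkit and GuoFeng data pipeline are not reproduced here.

from transformers import (MBart50TokenizerFast, MBartForConditionalGeneration,
                          Seq2SeqTrainingArguments)

model_name = "facebook/mbart-large-50-many-to-many-mmt"   # assumed base checkpoint
tokenizer = MBart50TokenizerFast.from_pretrained(model_name,
                                                 src_lang="zh_CN", tgt_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(model_name)

# Hyperparameters reported in the abstract: gelu activation and dropout of 0.1.
model.config.activation_function = "gelu"
model.config.dropout = 0.1

# Remaining reported values (3 epochs, learning rate 0.05, batch size 16) mapped onto
# training arguments; dataset preparation and the training loop are not shown.
args = Seq2SeqTrainingArguments(output_dir="mbart50-literary",
                                num_train_epochs=3,
                                learning_rate=0.05,
                                per_device_train_batch_size=16)

# Decoding with a beam of size 5, as described for the primary submission.
batch = tokenizer(["这是一个测试句子。"], return_tensors="pt")
generated = model.generate(**batch,
                           forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],
                           num_beams=5, max_length=128)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))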
Translating Dislocations or Parentheticals : Investigating the Role of Prosodic Boundaries for Spoken Language Translation of French into English
Nicolas Ballier | Behnoosh Namdarzadeh | Maria Zimina | Jean-Baptiste Yunès
Proceedings of Machine Translation Summit XIX, Vol. 2: Users Track
This paper examines some of the effects of prosodic boundaries on ASR outputs and spoken language translations into English for two competing French structures (“c’est” dislocations vs. “c’est” parentheticals). One native speaker of French read 104 test sentences that were then submitted to two systems. We compared the outputs of two toolkits, SYSTRAN Pure Neural Server (SPNS9) (Crego et al., 2016) and Whisper. For SPNS9, we compared the translation of the text file used for the reading with the translation of the transcription generated through Vocapia ASR. We also tested the transcription engine for speech recognition by uploading an MP3 file, and used the same procedure for OpenAI’s Whisper (Web-scale Supervised Pretraining for Speech Recognition) system (Radford et al., 2022). We reported WER for the transcription tasks and BLEU scores for the different models. We evidenced the variability of the punctuation in the ASR outputs and discussed it in relation to the duration of the utterance. We discussed the effects of the prosodic boundaries. We described the status of the boundary in the speech-to-text systems, discussing the consequences for neural machine translation of rendering the prosodic boundary as a comma, a full stop, or any other punctuation symbol. We used the reference transcript of the reading phase to compute the edit distance between the reference transcript and the ASR output. We also used textometric analyses with iTrameur (Fleury and Zimina, 2014) for insights into the errors that can be attributed to ASR or to Neural Machine Translation.
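The abstract does not name the scoring implementations; assuming off-the-shelf libraries such as jiwer and sacrebleu, the reported metrics could be computed along the following lines (the sentences below are invented placeholders, not the 104 test items).

import jiwer
import sacrebleu

# Word error rate between the reading-phase reference transcript and the ASR output.
reference_transcript = ["c'est une question importante qu'il faut régler"]
asr_output = ["c'est une question importante qu'il faut régler."]
wer = jiwer.wer(reference_transcript, asr_output)

# BLEU between a system translation and a reference English translation.
hypotheses = ["it is an important question that must be settled"]
references = [["this is an important issue that needs to be settled"]]
bleu = sacrebleu.corpus_bleu(hypotheses, references)

print(f"WER: {wer:.3f}  BLEU: {bleu.score:.2f}")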
2022
Toward a Test Set of Dislocations in Persian for Neural Machine Translation
Behnoosh Namdarzadeh | Nicolas Ballier | Lichao Zhu | Guillaume Wisniewski | Jean-Baptiste Yunès
Proceedings of the Third International Workshop on NLP Solutions for Under Resourced Languages (NSURL 2022) co-located with ICNLSP 2022
The SPECTRANS System Description for the WMT22 Biomedical Task
Nicolas Ballier | Jean-Baptiste Yunès | Guillaume Wisniewski | Lichao Zhu | Maria Zimina
Proceedings of the Seventh Conference on Machine Translation (WMT)
This paper describes the SPECTRANS submission for the WMT 2022 biomedical shared task. We present the results of our experiments using the training corpora and the JoeyNMT (Kreutzer et al., 2019) and SYSTRAN Pure Neural Server / Advanced Model Studio toolkits for the language directions English to French and French to English. We compare the predictions of the different toolkits. We also use JoeyNMT to fine-tune the model with a selection of texts from the WMT, Khresmoi and UFAL data sets. We report our results and assess the respective merits of the different translated texts.
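For the comparison of predictions across toolkits, one simple option is to score each system's output against the same reference, sentence by sentence, with sacrebleu; the sketch below is illustrative only, and the system outputs and reference are invented placeholders rather than shared-task data.

import sacrebleu

reference = "the patient was discharged after three days"
system_outputs = {
    "JoeyNMT": "the patient was discharged after three days",
    "SYSTRAN": "the patient left the hospital after three days",
}

# Sentence-level BLEU for each toolkit against the same reference.
for name, hyp in system_outputs.items():
    score = sacrebleu.sentence_bleu(hyp, [reference])
    print(f"{name}: BLEU = {score.score:.1f}")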
2021
The SPECTRANS System Description for the WMT21 Terminology Task
Nicolas Ballier | Dahn Cho | Bilal Faye | Zong-You Ke | Hanna Martikainen | Mojca Pecman | Guillaume Wisniewski | Jean-Baptiste Yunès | Lichao Zhu | Maria Zimina-Poirot
Proceedings of the Sixth Conference on Machine Translation
This paper discusses the WMT 2021 terminology shared task from a “meta” perspective. We present the results of our experiments using the terminology dataset and the OpenNMT (Klein et al., 2017) and JoeyNMT (Kreutzer et al., 2019) toolkits for the language direction English to French. Experiment 1 compares the predictions of the two toolkits; experiment 2 uses OpenNMT to fine-tune the model. We report our results for the task with the evaluation script, but mostly discuss the linguistic properties of the terminology dataset provided for the task. Having replicated the evaluation scripts, we provide evidence of the importance of text genres across scores.
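The official evaluation script is not reproduced here; as a rough, simplified illustration of the kind of terminology check involved, the sketch below counts how many expected target-side terms surface in each hypothesis (term pairs and outputs are invented examples).

def term_coverage(examples):
    # examples: list of (hypothesis, [expected_target_terms]) pairs.
    hits = total = 0
    for hyp, expected_terms in examples:
        lowered = hyp.lower()
        for term in expected_terms:
            total += 1
            hits += term.lower() in lowered
    return hits / total if total else 0.0

examples = [
    ("la protéine de spicule se lie au récepteur", ["protéine de spicule"]),
    ("le confinement a été prolongé", ["confinement"]),
]
print(f"terminology coverage: {term_coverage(examples):.2%}")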
2020
The Learnability of the Annotated Input in NMT Replicating (Vanmassenhove and Way, 2018) with OpenNMT
Nicolas Ballier | Nabil Amari | Laure Merat | Jean-Baptiste Yunès
Proceedings of the Twelfth Language Resources and Evaluation Conference
In this paper, we reproduce some of the experiments related to neural network training for Machine Translation reported in (Vanmassenhove and Way, 2018). They annotated a sample from the EN-FR and EN-DE Europarl aligned corpora with syntactic and semantic annotations to train neural networks with the Nematus Neural Machine Translation (NMT) toolkit. Replicating the original setup, we obtained lower BLEU scores than the authors of the original paper, but on a more limited set of annotations. In the second half of the paper, we analyse the differences in the results obtained and suggest some methods to improve them. We discuss the Byte Pair Encoding (BPE) used in the pre-processing phase and suggest feature ablation in relation to the granularity of the syntactic and semantic annotations. The learnability of the annotated input is discussed in relation to existing resources for the target languages. We also discuss the feature representation likely to have been adopted for combining features.
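As an illustration of how token-level annotations can interact with BPE granularity, the sketch below concatenates each word with hypothetical syntactic and semantic features and then duplicates those features on every subword piece; this is one possible representation, not necessarily the one used by Vanmassenhove and Way (2018) or in our replication.

def annotate(tokens, pos_tags, supersenses):
    # Concatenate each word with its syntactic and semantic features.
    return ["|".join(parts) for parts in zip(tokens, pos_tags, supersenses)]

def apply_bpe(annotated, bpe_splits):
    # Toy BPE step: split the word and keep the feature suffix on every subword piece.
    out = []
    for token in annotated:
        word, features = token.split("|", 1)
        pieces = bpe_splits.get(word, [word])
        out.extend(f"{piece}|{features}" for piece in pieces)
    return out

tokens = ["negotiations", "resumed"]
pos = ["NNS", "VBD"]
supersenses = ["n.communication", "v.social"]
annotated = annotate(tokens, pos, supersenses)
# A long word may be split by BPE, so its features are duplicated on each piece,
# which is one of the granularity issues discussed in the paper.
print(apply_bpe(annotated, {"negotiations": ["negoti@@", "ations"]}))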