2024
Evaluating Self-Supervised Speech Representations for Indigenous American Languages
Chih-Chen Chen | William Chen | Rodolfo Joel Zevallos | John E. Ortega
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
The application of self-supervision to speech representation learning has garnered significant interest in recent years, due to its scalability to large amounts of unlabeled data. However, much progress, both in terms of pre-training and downstream evaluation, has remained concentrated in monolingual models that only consider English. Few models consider other languages, and even fewer consider indigenous ones. In this work, we benchmark the efficacy of large SSL models on six indigenous American languages: Quechua, Guarani, Bribri, Kotiria, Wa’ikhana, and Totonac, for low-resource ASR. Our results show surprisingly strong performance by state-of-the-art SSL models, suggesting the potential generalizability of large-scale models to real-world data.
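As a rough illustration of the kind of SSL-based ASR evaluation described above, the sketch below transcribes one utterance with a CTC fine-tuned speech encoder; the checkpoint name is a hypothetical placeholder, not one of the paper's models.

```python
# Minimal sketch: greedy CTC decoding with an SSL speech model.
# The checkpoint name is a hypothetical placeholder.
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

CHECKPOINT = "org/xls-r-quechua-asr"  # hypothetical fine-tuned checkpoint

processor = Wav2Vec2Processor.from_pretrained(CHECKPOINT)
model = Wav2Vec2ForCTC.from_pretrained(CHECKPOINT)
model.eval()

waveform, sr = torchaudio.load("utterance.wav")
if sr != 16_000:  # XLS-R-style models expect 16 kHz audio
    waveform = torchaudio.functional.resample(waveform, sr, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000,
                   return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy decoding: argmax per frame; the processor collapses repeats/blanks.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```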
Related Work Is All You Need
Rodolfo Joel Zevallos | John E. Ortega | Benjamin Irving
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
In modern times, generative artificial intelligence is used in several industries and by many people. One use case that can be considered important but somewhat redundant is the act of searching for related work and other references to cite. As an avenue to better ascertain the value of citations and their corresponding locations, we take the common “related work” section as the object of our experimentation, with the overall objective of generating that section. In this article, we present a corpus with 400k annotations that distinguish related work from the rest of the references. Additionally, we show that for the papers in our experiments, the related work section represents the paper just as well, and in many cases better, than the rest of the references. We show that this is the case for more than 74% of the articles when using cosine similarity to measure the distance between embeddings from two common graph neural network algorithms: ProNE and SPECTER.
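A minimal sketch of the comparison just described, assuming precomputed ProNE- or SPECTER-style vectors (the random arrays below are stand-ins for real embeddings):

```python
# Compare a paper's "related work" references to the remaining references
# via cosine similarity of averaged embeddings. Vectors are placeholders.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

paper_vec = np.random.rand(768)          # embedding of the paper itself
related_vecs = np.random.rand(12, 768)   # refs cited in the related work
other_vecs = np.random.rand(30, 768)     # the remaining references

sim_related = cosine(paper_vec, related_vecs.mean(axis=0))
sim_other = cosine(paper_vec, other_vecs.mean(axis=0))
print("related work represents the paper better:", sim_related > sim_other)
```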
QUESPA Submission for the IWSLT 2024 Dialectal and Low-resource Speech Translation Task
John E. Ortega | Rodolfo Joel Zevallos | Ibrahim Said Ahmad | William Chen
Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024)
This article describes the QUESPA team speech translation (ST) submissions for the Quechua to Spanish (QUE–SPA) track featured in the Evaluation Campaign of IWSLT 2024: dialectal and low-resource speech translation. Two main submission types were supported in the campaign: constrained and unconstrained. This is our second year submitting our ST systems to the IWSLT shared task, and we feel that we have achieved novel performance, surpassing last year’s submissions. Again, we were able to submit six total systems, of which our best (primary) constrained system consisted of an ST model based on the Fairseq S2T framework, where the audio representations were created using log mel-scale filter banks as features and the translations were performed using a transformer. The system was similar to last year’s submission with slight configuration changes, allowing us to achieve slightly higher performance (2 BLEU). Contrastingly, we were able to achieve much better performance than last year on the unconstrained task using a larger pre-trained language model (PLM) for ST (without cascading) and the inclusion of parallel QUE–SPA data found on the internet. The fine-tuning of Microsoft’s SpeechT5 model in an ST setting, along with the addition of new data and a data augmentation technique, allowed us to achieve 19.7 BLEU. Additionally, we present the other four submissions (2 constrained and 2 unconstrained), which are part of additional efforts in hyper-parameter and configuration tuning on existing models and the inclusion of Whisper for speech recognition.
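A small sketch of the log mel-scale filter bank features mentioned above (Fairseq S2T conventionally uses 80-dimensional filterbanks; the exact extraction settings in the submission may differ):

```python
# Extract log-mel filterbank features from one utterance with torchaudio.
import torch
import torchaudio

waveform, sr = torchaudio.load("quechua_utterance.wav")
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=sr, n_fft=400, hop_length=160, n_mels=80)(waveform)
log_mel = torch.log(mel + 1e-6)  # log compression for numerical stability
print(log_mel.shape)             # (channels, 80, frames)
```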
Is it safe to machine translate suicide-related language from English to Galician?
John E. Ortega | Annika Marie Schoene
Proceedings of the 16th International Conference on Computational Processing of Portuguese - Vol. 1
2023
A Research-Based Guide for the Creation and Deployment of a Low-Resource Machine Translation System
John E. Ortega | Kenneth Church
Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing
The machine translation (MT) field seems to focus heavily on English and other high-resource languages. However, low-resource MT (LRMT) is receiving more attention than in the past. Successful LRMT systems (LRMTS) should make a compelling business case in terms of demand, cost, and quality in order to be viable for end users. When used by communities where low-resource languages are spoken, LRMT quality should not only be determined by the use of traditional metrics like BLEU, but should also take into account other factors in order to be inclusive and not risk overall rejection by the community. MT systems based on neural methods tend to perform better with high volumes of training data, but they may be unrealistic and even harmful for LRMT. It is obvious that for research purposes, the development and creation of LRMTS is necessary. However, in this article, we argue that two main workarounds could be adopted by companies considering deployment of LRMTS in the wild: human-in-the-loop and sub-domains.
Classification of US Supreme Court Cases Using BERT-Based Techniques
Shubham Vatsal | Adam Meyers | John E. Ortega
Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing
Models based on bidirectional encoder representations from transformers (BERT) produce state-of-the-art (SOTA) results on many natural language processing (NLP) tasks such as named entity recognition (NER), part-of-speech (POS) tagging, etc. An interesting phenomenon occurs when classifying long documents, such as those from the US Supreme Court, where BERT-based models can be considered difficult to use on a first-pass or out-of-the-box basis. In this paper, we experiment with several BERT-based classification techniques for US Supreme Court decisions from the Supreme Court Database (SCDB) and compare them with the previous SOTA results. We then compare our results specifically with SOTA models for long documents. We compare our results for two classification tasks: (1) a broad classification task with 15 categories and (2) a fine-grained classification task with 279 categories. Our best result produces an accuracy of 80% on the 15 broad categories and 60% on the fine-grained 279 categories, which marks improvements of 8% and 28%, respectively, over previously reported SOTA results.
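A minimal sketch of the first-pass setup the paper starts from: a vanilla BERT classifier whose 512-token window forces truncation of long opinions, which is exactly the difficulty being worked around. The base checkpoint and untrained 15-way head below are illustrative, not the paper's tuned models.

```python
# First-pass BERT classification of a (truncated) Supreme Court opinion.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=15)  # 15 broad SCDB categories

opinion = "The judgment of the Court of Appeals is reversed ..."
inputs = tokenizer(opinion, truncation=True, max_length=512,
                   return_tensors="pt")  # long opinions get cut off here
with torch.no_grad():
    logits = model(**inputs).logits
print("predicted category id:", logits.argmax(dim=-1).item())
```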
Findings of the IWSLT 2023 Evaluation Campaign
Milind Agarwal | Sweta Agrawal | Antonios Anastasopoulos | Luisa Bentivogli | Ondřej Bojar | Claudia Borg | Marine Carpuat | Roldano Cattoni | Mauro Cettolo | Mingda Chen | William Chen | Khalid Choukri | Alexandra Chronopoulou | Anna Currey | Thierry Declerck | Qianqian Dong | Kevin Duh | Yannick Estève | Marcello Federico | Souhir Gahbiche | Barry Haddow | Benjamin Hsu | Phu Mon Htut | Hirofumi Inaguma | Dávid Javorský | John Judge | Yasumasa Kano | Tom Ko | Rishu Kumar | Pengwei Li | Xutai Ma | Prashant Mathur | Evgeny Matusov | Paul McNamee | John P. McCrae | Kenton Murray | Maria Nadejde | Satoshi Nakamura | Matteo Negri | Ha Nguyen | Jan Niehues | Xing Niu | Atul Kr. Ojha | John E. Ortega | Proyag Pal | Juan Pino | Lonneke van der Plas | Peter Polák | Elijah Rippeth | Elizabeth Salesky | Jiatong Shi | Matthias Sperber | Sebastian Stüker | Katsuhito Sudoh | Yun Tang | Brian Thompson | Kevin Tran | Marco Turchi | Alex Waibel | Mingxuan Wang | Shinji Watanabe | Rodolfo Zevallos
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)
This paper reports on the shared tasks organized by the 20th IWSLT Conference. The shared tasks address 9 scientific challenges in spoken language translation: simultaneous and offline translation, automatic subtitling and dubbing, speech-to-speech translation, multilingual, dialect and low-resource speech translation, and formality control. The shared tasks attracted a total of 38 submissions by 31 teams. The growing interest in spoken language translation is also evidenced by the constantly increasing number of shared task organizers and contributors to the overview paper, almost evenly distributed across industry and academia.
QUESPA Submission for the IWSLT 2023 Dialect and Low-resource Speech Translation Tasks
John E. Ortega | Rodolfo Zevallos | William Chen
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)
This article describes the QUESPA team speech translation (ST) submissions for the Quechua to Spanish (QUE–SPA) track featured in the Evaluation Campaign of IWSLT 2023: low-resource and dialect speech translation. Two main submission types were supported in the campaign: constrained and unconstrained. We submitted six total systems, of which our best (primary) constrained system consisted of an ST model based on the Fairseq S2T framework, where the audio representations were created using log mel-scale filter banks as features and the translations were performed using a transformer. The best (primary) unconstrained system used a pipeline approach which combined automatic speech recognition (ASR) with machine translation (MT). The ASR transcriptions for the best unconstrained system were computed using a pre-trained XLS-R-based model along with a fine-tuned language model. Transcriptions were translated using an MT system based on a fine-tuned, pre-trained language model (PLM). The four other submissions (2 constrained and 2 unconstrained) are presented in this article for comparison because they consist of various architectures. Our results show that direct ST (combining ASR and MT in a single model) can be more effective than a PLM in a low-resource (constrained) setting for Quechua to Spanish. On the other hand, we show that fine-tuning of any type on both the ASR and MT systems is worthwhile, resulting in nearly 16 BLEU for the unconstrained task.
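A sketch of the cascaded (pipeline) approach described above, with ASR transcription followed by MT; both checkpoint names are hypothetical placeholders for the fine-tuned XLS-R and PLM-based models.

```python
# Cascaded speech translation: ASR, then MT. Checkpoints are placeholders.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="org/xls-r-quechua")  # hypothetical ASR checkpoint
mt = pipeline("translation",
              model="org/mt-que-spa")      # hypothetical QUE->SPA MT model

transcript = asr("quechua_utterance.wav")["text"]
translation = mt(transcript)[0]["translation_text"]
print(translation)
```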
Meeting the Needs of Low-Resource Languages: The Value of Automatic Alignments via Pretrained Models
Abteen Ebrahimi | Arya D. McCarthy | Arturo Oncevay | John E. Ortega | Luis Chiruzzo | Gustavo Giménez-Lugo | Rolando Coto-Solano | Katharina Kann
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Large multilingual models have inspired a new class of word alignment methods, which work well for the model’s pretraining languages. However, the languages most in need of automatic alignment are low-resource and, thus, not typically included in the pretraining data. In this work, we ask: How do modern aligners perform on unseen languages, and are they better than traditional methods? We contribute gold-standard alignments for Bribri–Spanish, Guarani–Spanish, Quechua–Spanish, and Shipibo-Konibo–Spanish. With these, we evaluate state-of-the-art aligners with and without model adaptation to the target language. Finally, we also evaluate the resulting alignments extrinsically through two downstream tasks: named entity recognition and part-of-speech tagging. We find that although transformer-based methods generally outperform traditional models, the two classes of approach remain competitive with each other.
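The sketch below illustrates the general similarity-matrix idea behind multilingual-encoder aligners (mutual argmax over contextual embeddings, as in SimAlign-style methods); it is not the paper's exact method, and the toy sentence pair is illustrative.

```python
# Similarity-matrix word alignment with a multilingual encoder.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
enc = AutoModel.from_pretrained("xlm-roberta-base")

def embed(words):
    # One vector per word: encode the sentence, average subword states.
    ids = tok(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        states = enc(**ids).last_hidden_state[0]
    word_ids = ids.word_ids()
    return torch.stack([
        states[[i for i, w in enumerate(word_ids) if w == k]].mean(dim=0)
        for k in range(len(words))])

src = ["ñuqa", "wasiyta", "rini"]        # toy Quechua sentence
tgt = ["yo", "voy", "a", "mi", "casa"]   # toy Spanish sentence
sim = torch.nn.functional.normalize(embed(src), dim=-1) @ \
      torch.nn.functional.normalize(embed(tgt), dim=-1).T

# Keep only pairs that are argmaxes in both directions (intersection).
align = [(i, j) for i in range(len(src)) for j in range(len(tgt))
         if sim[i].argmax() == j and sim[:, j].argmax() == i]
print(align)
```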
Findings of the AmericasNLP 2023 Shared Task on Machine Translation into Indigenous Languages
Abteen Ebrahimi | Manuel Mager | Shruti Rijhwani | Enora Rice | Arturo Oncevay | Claudia Baltazar | María Cortés | Cynthia Montaño | John E. Ortega | Rolando Coto-Solano | Hilaria Cruz | Alexis Palmer | Katharina Kann
Proceedings of the Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP)
In this work, we present the results of the AmericasNLP 2023 Shared Task on Machine Translation into Indigenous Languages of the Americas. This edition of the shared task featured eleven language pairs, one of which, Chatino-Spanish, uses a newly collected evaluation dataset consisting of professionally translated text from the legal domain. Seven teams participated in the shared task, with a total of 181 submissions. Additionally, we conduct a human evaluation of the best system outputs and compare them to the best submissions from the prior shared task. We find that this analysis agrees with the quantitative measures used to rank submissions, which show a further improvement of 9.64 chrF on average across all languages when compared to the prior winning system.
Findings of the CoCo4MT 2023 Shared Task on Corpus Construction for Machine Translation
Ananya Ganesh | Marine Carpuat | William Chen | Katharina Kann | Constantine Lignos | John E. Ortega | Jonne Saleva | Shabnam Tafreshi | Rodolfo Zevallos
Proceedings of the Second Workshop on Corpus Generation and Corpus Augmentation for Machine Translation
This paper provides an overview of the first shared task on choosing beneficial instances for machine translation, conducted as part of the CoCo4MT 2023 Workshop at MTSummit. This shared task was motivated by the need to make the data annotation process for machine translation more efficient, particularly for low-resource languages for which collecting human translations may be difficult or expensive. The task involved developing methods for selecting the most beneficial instances for training a machine translation system without access to an existing parallel dataset in the target language, such that the best selected instances can then be manually translated. Two teams participated in the shared task, namely the Williams team and the AST team. Submissions were evaluated by training a machine translation model on each submission’s chosen instances and comparing their performance using the chrF++ score. The system that ranked first is by the Williams team, which finds representative instances by clustering the training data.
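A minimal sketch of the evaluation metric used above: chrF++ is sacrebleu's CHRF with word n-grams enabled (the toy hypothesis/reference pair is illustrative).

```python
# Score translations with chrF++ via sacrebleu.
from sacrebleu.metrics import CHRF

chrf_pp = CHRF(word_order=2)  # word_order=2 gives chrF++
hypotheses = ["el gato está en la alfombra"]
references = [["el gato está sobre la alfombra"]]  # one reference stream
print(chrf_pp.corpus_score(hypotheses, references))
```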
2022
WordNet-QU: Development of a Lexical Database for Quechua Varieties
Nelsi Melgarejo | Rodolfo Zevallos | Hector Gomez | John E. Ortega
Proceedings of the 29th International Conference on Computational Linguistics
In the effort to minimize the risk of extinction of a language, linguistic resources are fundamental. Quechua, a low-resource language from South America, is a language spoken by millions but, despite several efforts in the past, still lacks the resources necessary to build high-performance computational systems. In this article, we present WordNet-QU, which signifies the inclusion of Quechua in the well-known lexical database WordNet. We propose WordNet-QU to be included as an extension to WordNet after demonstrating a manually-curated collection of multiple digital resources for lexical use in Quechua. Our work uses the synset alignment algorithm to compare Quechua to its geographically nearest high-resource language, Spanish. Altogether, we propose a total of 28,582 unique synset IDs, divided by region as follows: 20,510 for Southern Quechua, 5,993 for Central Quechua, 1,121 for Northern Quechua, and 958 for Amazonian Quechua.
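A small sketch of the pivot idea, assuming NLTK's WordNet with the Open Multilingual Wordnet: reach a synset ID through Spanish and attach a Quechua lemma to it. The Quechua entry below is an illustrative toy, not WordNet-QU data.

```python
# Pivot through Spanish to anchor a Quechua lemma to a WordNet synset ID.
import nltk
nltk.download("wordnet")
nltk.download("omw-1.4")
from nltk.corpus import wordnet as wn

syns = wn.synsets("perro", lang="spa")       # Spanish lemma -> synsets
target = syns[0]
synset_id = f"{target.offset():08d}-{target.pos()}"
wordnet_qu = {synset_id: ["allqu"]}          # toy Southern Quechua lemma
print(synset_id, "->", wordnet_qu[synset_id])
```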
Proceedings of the 15th biennial conference of the Association for Machine Translation in the Americas (Workshop 2: Corpus Generation and Corpus Augmentation for Machine Translation)
John E. Ortega | Marine Carpuat | William Chen | Katharina Kann | Constantine Lignos | Maja Popovic | Shabnam Tafreshi
Proceedings of the 15th biennial conference of the Association for Machine Translation in the Americas (Workshop 2: Corpus Generation and Corpus Augmentation for Machine Translation)
The Nós Project: Opening routes for the Galician language in the field of language technologies
Iria de-Dios-Flores | Carmen Magariños | Adina Ioana Vladu | John E. Ortega | José Ramom Pichel | Marcos García | Pablo Gamallo | Elisa Fernández Rei | Alberto Bugarín-Diz | Manuel González González | Senén Barro | Xosé Luis Regueira
Proceedings of the Workshop Towards Digital Language Equality within the 13th Language Resources and Evaluation Conference
The development of language technologies (LTs) such as machine translation, text analytics, and dialogue systems is essential in the current digital society, culture and economy. These LTs, widely supported in languages in high demand worldwide, such as English, are also necessary for smaller and less economically powerful languages, as they are a driving force in the democratization of the communities that use them due to their great social and cultural impact. As an example, dialogue systems allow us to communicate with machines in our own language; machine translation increases access to contents in different languages, thus facilitating intercultural relations; and text-to-speech and speech-to-text systems broaden different categories of users’ access to technology. In the case of Galician (co-official language, together with Spanish, in the autonomous region of Galicia, located in northwestern Spain), incorporating the language into state-of-the-art AI applications can not only significantly favor its prestige (a decisive factor in language normalization), but also guarantee citizens’ language rights, reduce social inequality, and narrow the digital divide. This is the main motivation behind the Nós Project (Proxecto Nós), which aims to have a significant contribution to the development of LTs in Galician (currently considered a low-resource language) by providing openly licensed resources, tools, and demonstrators in the area of intelligent technologies.
2021
Love Thy Neighbor: Combining Two Neighboring Low-Resource Languages for Translation
John E. Ortega | Richard Alexander Castro Mamani | Jaime Rafael Montoya Samame
Proceedings of the 4th Workshop on Technologies for MT of Low Resource Languages (LoResMT2021)
Low-resource languages sometimes take on similar morphological and syntactic characteristics due to their geographic nearness and shared history. Two low-resource neighboring languages found in Peru, Quechua and Ashaninka, can be considered, at first glance, two languages that are morphologically similar. In order to translate the two languages, various approaches have been taken. For Quechua, transfer learning for neural machine translation has been used along with byte-pair encoding. For Ashaninka, the language of the two with fewer resources, a finite-state transducer is used to transform Ashaninka texts and its dialects for machine translation use. We evaluate and compare the two approaches by attempting to use newly-formed Ashaninka corpora for neural machine translation. Our experiments show that combining the two neighboring languages, while similar in morphology, word sharing, and geographical location, improves Ashaninka–Spanish translation but degrades Quechua–Spanish translations.
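A minimal sketch of the byte-pair encoding step mentioned above, using SentencePiece on a toy corpus file; the path, vocabulary size, and sample sentence are illustrative, not the paper's configuration.

```python
# Train a BPE subword model and segment a toy Quechua sentence.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="quechua_corpus.txt", model_prefix="que_bpe",
    vocab_size=4000, model_type="bpe")

sp = spm.SentencePieceProcessor(model_file="que_bpe.model")
print(sp.encode("wasiykiman rishani", out_type=str))
```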
2020
Overcoming Resistance: The Normalization of an Amazonian Tribal Language
John E. Ortega | Richard Alexander Castro-Mamani | Jaime Rafael Montoya Samame
Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages
Languages can be considered endangered for many reasons. One of the principal reasons for endangerment is the disappearance of a language’s speakers. Another, more identifiable reason is the lack of written resources. We present an automated sub-segmentation system called AshMorph that deals with the morphology of an Amazonian tribal language called Ashaninka, which is at risk of becoming endangered due to the lack of availability (or resistance) of native speakers and the absence of written resources. We show that by the use of a cross-lingual lexicon and finite-state transducers we can increase accuracy by more than 30% when compared to other modern sub-segmentation tools. Our results, made freely available online, are verified by an Ashaninka speaker and perform well in two distinct domains: everyday literary articles and the Bible. This research serves as a first step in helping to preserve Ashaninka by offering a sub-segmentation process that can be used to normalize any Ashaninka text, which can then serve as input to a machine translation system for translation into high-resource languages spoken in neighboring, more populous areas, such as Spanish and Portuguese in Peru and Brazil, where Ashaninka is mostly spoken.
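As a toy illustration of lexicon-driven sub-segmentation, the sketch below greedily strips suffixes from a word against a small lexicon; AshMorph itself uses finite-state transducers and a cross-lingual lexicon, and the suffix entries here are made up.

```python
# Toy greedy longest-match suffix stripping (illustrative only).
SUFFIXES = ["ri", "ak", "i"]  # illustrative entries, not AshMorph data

def segment(word: str) -> list[str]:
    morphs = []
    while word:
        match = next((s for s in sorted(SUFFIXES, key=len, reverse=True)
                      if word.endswith(s) and len(word) > len(s)), None)
        if match is None:
            break
        morphs.insert(0, match)   # collect suffixes right-to-left
        word = word[: -len(match)]
    return [word] + morphs        # stem plus stripped suffixes

print(segment("kamethari"))
```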
Proceedings of 1st Workshop on Post-Editing in Modern-Day Translation
John E. Ortega | Marcello Federico | Constantin Orasan | Maja Popovic
Proceedings of 1st Workshop on Post-Editing in Modern-Day Translation
2018
Letting a Neural Network Decide Which Machine Translation System to Use for Black-Box Fuzzy-Match Repair
John E. Ortega | Weiyi Lu | Adam Meyers | Kyunghyun Cho
Proceedings of the 21st Annual Conference of the European Association for Machine Translation
While systems using the Neural Network-based Machine Translation (NMT) paradigm achieve the highest scores on recent shared tasks, phrase-based (PBMT) systems, rule-based (RBMT) systems and other systems may get better results for individual examples. Therefore, combined systems should achieve the best results for MT, particularly if the system combination method can take advantage of the strengths of each paradigm. In this paper, we describe a system that predicts whether an NMT, PBMT or RBMT system will produce the best Spanish translation result for a particular English sentence in DGT-TM 2016. Then we use fuzzy-match repair (FMR) as a mechanism to show that the combined system outperforms individual systems in a black-box machine translation setting.
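A sketch of the system-selection idea: a small network scores each source sentence and picks which paradigm's output to keep. The features, architecture, and weights below are random placeholders, not the paper's trained model.

```python
# Toy selector choosing among NMT, PBMT, and RBMT outputs per sentence.
import torch
import torch.nn as nn

SYSTEMS = ["NMT", "PBMT", "RBMT"]

selector = nn.Sequential(               # untrained, illustrative network
    nn.Linear(768, 128), nn.ReLU(), nn.Linear(128, len(SYSTEMS)))

sentence_features = torch.rand(1, 768)  # stand-in sentence embedding
choice = selector(sentence_features).argmax(dim=-1).item()
print("use translation from:", SYSTEMS[choice])
```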
2014
Using any machine translation source for fuzzy-match repair in a computer-aided translation setting
John E. Ortega | Felipe Sánchez-Martinez | Mikel L. Forcada
Proceedings of the 11th Conference of the Association for Machine Translation in the Americas: MT Researchers Track
When a computer-assisted translation (CAT) tool does not find an exact match for the source segment to translate in its translation memory (TM), translators must use fuzzy matches that come from translation units in the translation memory that do not completely match the source segment. We explore the use of a fuzzy-match repair technique called patching to repair translation proposals from a TM in a CAT environment using any available machine translation system, or any external bilingual source, regardless of its internals. Patching attempts to aid CAT tool users by repairing fuzzy matches and proposing improved translations. Our results show that patching improves the quality of translation proposals and reduces the amount of edit operations to perform, especially when a specific set of restrictions is applied.
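A minimal sketch of the fuzzy-matching step that precedes repair: retrieve the translation-memory unit whose source side is closest to the new segment by an edit-similarity ratio. The TM entries are toys, and this shows only retrieval, not the patching technique itself.

```python
# Retrieve the best fuzzy match from a toy translation memory.
from difflib import SequenceMatcher

tm = [("the cat sat on the mat", "el gato se sentó en la alfombra"),
      ("the dog sat on the rug", "el perro se sentó en la manta")]

segment = "the cat sat on the rug"
score, best = max(
    (SequenceMatcher(None, segment, src).ratio(), (src, tgt))
    for src, tgt in tm)
print(f"fuzzy match {score:.2f}:", best)
```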