2025
Findings of the AmericasNLP 2025 Shared Tasks on Machine Translation, Creation of Educational Material, and Translation Metrics for Indigenous Languages of the Americas
Ona De Gibert | Robert Pugh | Ali Marashian | Raul Vazquez | Abteen Ebrahimi | Pavel Denisov | Enora Rice | Edward Gow-Smith | Juan Prieto | Melissa Robles | Rubén Manrique | Oscar Moreno | Angel Lino | Rolando Coto-Solano | Aldo Alvarez | Marvin Agüero-Torales | John E. Ortega | Luis Chiruzzo | Arturo Oncevay | Shruti Rijhwani | Katharina von der Wense | Manuel Mager
Proceedings of the Fifth Workshop on NLP for Indigenous Languages of the Americas (AmericasNLP)
This paper presents the findings of the AmericasNLP 2025 Shared Tasks: (1) machine translation for truly low-resource languages, (2) morphological adaptation for generating educational examples, and (3) developing metrics for machine translation in Indigenous languages. The shared tasks cover 14 diverse Indigenous languages of the Americas. A total of 11 teams participated, submitting 26 systems across all tasks, languages, and models. We describe the shared tasks, introduce the datasets and evaluation metrics used, summarize the baselines and submitted systems, and report our findings.
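As a rough illustration of how such machine translation submissions are typically scored (ChrF is among the metrics reported in these shared tasks; see the 2023 findings below), the following sketch uses the sacrebleu library on invented hypothesis and reference sentences; the sentences are placeholders, not task data.

```python
# Minimal sketch of ChrF-based scoring, as commonly used to rank MT shared task
# submissions. The sentences below are invented placeholders, not task data.
import sacrebleu

hypotheses = ["el perro corre en el campo", "la casa es grande"]
references = [["el perro corre por el campo", "la casa es muy grande"]]

# Corpus-level character n-gram F-score (ChrF); higher is better.
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"ChrF: {chrf.score:.2f}")
```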
From Priest to Doctor: Domain Adaptation for Low-Resource Neural Machine Translation
Ali Marashian | Enora Rice | Luke Gessler | Alexis Palmer | Katharina von der Wense
Proceedings of the 31st International Conference on Computational Linguistics
Many of the world’s languages have insufficient data to train high-performing general neural machine translation (NMT) models, let alone domain-specific models, and often the only available parallel data are small amounts of religious texts. Hence, domain adaptation (DA) is a crucial issue faced by contemporary NMT and has, so far, been underexplored for low-resource languages. In this paper, we evaluate a set of methods from both low-resource NMT and DA in a realistic setting, in which we aim to translate between a high-resource and a low-resource language with access to only: a) parallel Bible data, b) a bilingual dictionary, and c) a monolingual target-domain corpus in the high-resource language. Our results show that the effectiveness of the tested methods varies, with the simplest one, DALI, being most effective. We follow up with a small human evaluation of DALI, which shows that there is still a need for more careful investigation of how to accomplish DA for low-resource NMT.
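For intuition, DALI-style adaptation builds synthetic in-domain parallel data by translating target-domain sentences word by word through a bilingual lexicon; the sketch below illustrates that substitution idea with an invented toy lexicon and sentence, not the method's full pipeline or the paper's data.

```python
# Toy sketch of dictionary-based pseudo-parallel data creation (the core idea
# behind DALI-style domain adaptation). The lexicon and sentences are invented
# placeholders, not data from the paper.

def word_by_word_translate(sentence, bilingual_dict):
    """Replace each token with its dictionary translation, keeping unknown
    tokens unchanged (a common fallback in lexicon-based substitution)."""
    return " ".join(bilingual_dict.get(tok, tok) for tok in sentence.split())

# Hypothetical high-resource-language -> low-resource-language dictionary.
lexicon = {"doctor": "tsiri", "medicine": "kawi", "the": "na"}

# Monolingual target-domain (e.g., medical) sentences in the high-resource language.
monolingual_domain = ["the doctor gives medicine"]

# Pseudo-parallel pairs: (high-resource source, synthetic low-resource target).
pseudo_parallel = [(s, word_by_word_translate(s, lexicon)) for s in monolingual_domain]
print(pseudo_parallel)  # [('the doctor gives medicine', 'na tsiri gives kawi')]
```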
Untangling the Influence of Typology, Data, and Model Architecture on Ranking Transfer Languages for Cross-Lingual POS Tagging
Enora Rice | Ali Marashian | Hannah Haynie | Katharina von der Wense | Alexis Palmer
Proceedings of the 1st Workshop on Language Models for Underserved Communities (LM4UC 2025)
Cross-lingual transfer learning is an invaluable tool for overcoming data scarcity, yet selecting a suitable transfer language remains a challenge. The precise roles of linguistic typology, training data, and model architecture in transfer language choice are not fully understood. We take a holistic approach, examining how both dataset-specific and fine-grained typological features influence transfer language selection for part-of-speech tagging, considering two different sources for morphosyntactic features. While previous work examines these dynamics in the context of bilingual biLSTMs, we extend our analysis to a more modern transfer learning pipeline: zero-shot prediction with pretrained multilingual models. We train a series of transfer language ranking systems and examine how different feature inputs influence ranker performance across architectures. Word overlap, type-token ratio, and genealogical distance emerge as top features across all architectures. Our findings reveal that a combination of typological and dataset-dependent features leads to the best rankings, and that good performance can be obtained with either feature group on its own.
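To make two of those dataset-dependent features concrete, here is a small sketch computing word overlap and type-token ratio between a candidate transfer corpus and a target corpus; the corpora are toy token lists, and the exact feature definitions in the paper may differ.

```python
# Sketch of two dataset-dependent features the ranking study finds informative:
# word overlap between transfer and target corpora, and the transfer corpus's
# type-token ratio. The corpora below are toy examples.

def word_overlap(transfer_tokens, target_tokens):
    """Fraction of target word types that also occur in the transfer corpus."""
    transfer_types, target_types = set(transfer_tokens), set(target_tokens)
    return len(transfer_types & target_types) / len(target_types)

def type_token_ratio(tokens):
    """Number of distinct word types divided by the total token count."""
    return len(set(tokens)) / len(tokens)

transfer = "the cat sat on the mat".split()
target = "the dog sat under the tree".split()

print(word_overlap(transfer, target))  # 0.4 ("the" and "sat" shared out of 5 target types)
print(type_token_ratio(transfer))      # 0.833... (5 types over 6 tokens)
```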
2024
TAMS: Translation-Assisted Morphological Segmentation
Enora Rice | Ali Marashian | Luke Gessler | Alexis Palmer | Katharina von der Wense
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Canonical morphological segmentation is the process of analyzing words into the standard (aka underlying) forms of their constituent morphemes. This is a core task in endangered language documentation, and NLP systems have the potential to dramatically speed up this process. In typical language documentation settings, training data for canonical morpheme segmentation is scarce, making it difficult to train high-quality models. However, translation data is often much more abundant, and, in this work, we present a method that attempts to leverage translation data in the canonical segmentation task. We propose a character-level sequence-to-sequence model that incorporates representations of translations obtained from pretrained high-resource monolingual language models as an additional signal. Our model outperforms the baseline in a super-low resource setting but yields mixed results on training splits with more data. Additionally, we find that we can achieve strong performance even without needing difficult-to-obtain word-level alignments. While further work is needed to make translations useful in higher-resource settings, our model shows promise in severely resource-constrained settings.
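As a rough sketch of the kind of auxiliary signal described above, the snippet below pools a pretrained LM's hidden states over the free translation into a single vector and concatenates it to every character position of a source word; the model name, example word, and tensor shapes are illustrative assumptions, not the paper's exact architecture.

```python
# Rough sketch of translation-assisted conditioning for a character-level model:
# a sentence embedding of the (high-resource) translation is attached to every
# character position of the word to segment. Illustrative setup, not the paper's.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # stand-in LM choice
encoder = AutoModel.from_pretrained("bert-base-uncased")

translation = "the dog is running"  # free translation of the source sentence
inputs = tokenizer(translation, return_tensors="pt")
with torch.no_grad():
    # Mean-pool token states into one fixed-size translation vector.
    translation_vec = encoder(**inputs).last_hidden_state.mean(dim=1)  # (1, 768)

# Character-level inputs for one (hypothetical) source word, via a toy embedding.
char_vocab = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}
char_embed = torch.nn.Embedding(len(char_vocab), 64)
word = "okirika"
char_states = char_embed(torch.tensor([[char_vocab[c] for c in word]]))  # (1, 7, 64)

# Broadcast the translation vector across character positions and concatenate,
# giving the segmentation model access to translation context at every step.
conditioned = torch.cat(
    [char_states, translation_vec.unsqueeze(1).expand(-1, char_states.size(1), -1)],
    dim=-1,
)  # (1, 7, 64 + 768)
```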
On the Robustness of Neural Models for Full Sentence Transformation
Michael Ginn | Ali Marashian | Bhargav Shandilya | Claire Post | Enora Rice | Juan Vásquez | Marie McGregor | Matthew Buchholz | Mans Hulden | Alexis Palmer
Proceedings of the 4th Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP 2024)
This paper describes the LECS Lab submission to the AmericasNLP 2024 Shared Task on the Creation of Educational Materials for Indigenous Languages. The task requires transforming a base sentence with respect to one or more linguistic properties (such as negation or tense). We observe that this task shares many similarities with the well-studied task of word-level morphological inflection, and we explore whether the findings from inflection research are applicable to this task. In particular, we experiment with a number of augmentation strategies, finding that they can significantly benefit performance, but that not all augmented data is necessarily beneficial. Furthermore, we find that our character-level neural models show high variability in performance on unseen data, and may not be the best choice when training data is limited.
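For readers unfamiliar with those augmentation strategies, one common trick from inflection research hallucinates new training pairs by swapping a shared stem for random character strings; the sketch below shows that idea on an invented example pair and is not the team's actual augmentation code.

```python
# Toy sketch of one augmentation strategy from morphological inflection research:
# "hallucinating" new training pairs by replacing the shared stem of an existing
# source/target pair with random character strings. The example pair is invented.
import random

def hallucinate(src, tgt, alphabet="abcdefghijklmnopqrstuvwxyz", n=3):
    """Find the longest common prefix (a crude stand-in for the stem) and
    replace it with random strings to create synthetic training pairs."""
    stem_len = 0
    while stem_len < min(len(src), len(tgt)) and src[stem_len] == tgt[stem_len]:
        stem_len += 1
    pairs = []
    for _ in range(n):
        fake_stem = "".join(random.choice(alphabet) for _ in range(stem_len))
        pairs.append((fake_stem + src[stem_len:], fake_stem + tgt[stem_len:]))
    return pairs

# e.g., a (base form, negated form) pair sharing an invented stem "karai".
print(hallucinate("karai", "karai'ỹ"))
```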
GlossLM: A Massively Multilingual Corpus and Pretrained Model for Interlinear Glossed Text
Michael Ginn | Lindia Tjuatja | Taiqi He | Enora Rice | Graham Neubig | Alexis Palmer | Lori Levin
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Language documentation projects often involve the creation of annotated text in a format such as interlinear glossed text (IGT), which captures fine-grained morphosyntactic analyses in a morpheme-by-morpheme format. However, there are few existing resources providing large amounts of standardized, easily accessible IGT data, limiting their applicability to linguistic research and making it difficult to use such data in NLP modeling. We compile the largest existing corpus of IGT data from a variety of sources, covering over 450k examples across 1.8k languages, to enable research on crosslingual transfer and IGT generation. We normalize much of our data to follow a standard set of labels across languages. Furthermore, we explore the task of automatically generating IGT in order to aid documentation projects. As many languages lack sufficient monolingual data, we pretrain a large multilingual model on our corpus. We demonstrate the utility of this model by finetuning it on monolingual corpora, outperforming SOTA models by up to 6.6%. Our pretrained model and dataset are available on Hugging Face: https://huggingface.co/collections/lecslab/glosslm-66da150854209e910113dd87
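To make the IGT format concrete, the sketch below reads one invented interlinear example (segmented source line, gloss line, free translation) into morpheme-gloss pairs; the example sentence and labels are illustrative, not drawn from the corpus.

```python
# Small sketch of the interlinear glossed text (IGT) format: a morpheme-segmented
# source line aligned with a gloss line, plus a free translation. The example
# below is invented for illustration.

igt_example = {
    "segmentation": "ni-k-neki tlaxkal-li",   # morpheme-segmented source line
    "glosses": "1SG-3SG-want tortilla-ABS",   # one gloss per morpheme
    "translation": "I want the tortilla",     # free translation
}

def align_morphemes(segmentation, glosses):
    """Pair each morpheme with its gloss, word by word and morpheme by morpheme."""
    pairs = []
    for word, gloss_word in zip(segmentation.split(), glosses.split()):
        pairs.extend(zip(word.split("-"), gloss_word.split("-")))
    return pairs

print(align_morphemes(igt_example["segmentation"], igt_example["glosses"]))
# [('ni', '1SG'), ('k', '3SG'), ('neki', 'want'), ('tlaxkal', 'tortilla'), ('li', 'ABS')]
```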
2023
Proceedings of the Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP)
Manuel Mager | Abteen Ebrahimi | Arturo Oncevay | Enora Rice | Shruti Rijhwani | Alexis Palmer | Katharina Kann
Findings of the AmericasNLP 2023 Shared Task on Machine Translation into Indigenous Languages
Abteen Ebrahimi | Manuel Mager | Shruti Rijhwani | Enora Rice | Arturo Oncevay | Claudia Baltazar | María Cortés | Cynthia Montaño | John E. Ortega | Rolando Coto-Solano | Hilaria Cruz | Alexis Palmer | Katharina Kann
Proceedings of the Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP)
In this work, we present the results of the AmericasNLP 2023 Shared Task on Machine Translation into Indigenous Languages of the Americas. This edition of the shared task featured eleven language pairs, one of which – Chatino-Spanish – uses a newly collected evaluation dataset consisting of professionally translated text from the legal domain. Seven teams participated in the shared task, with a total of 181 submissions. Additionally, we conduct a human evaluation of the best system outputs and compare them to the best submissions from the prior shared task. We find that this analysis agrees with the quantitative measures used to rank submissions, which show further improvements of 9.64 ChrF on average across all languages compared to the prior winning system.