Negasi Haile Abadi


2025

A Case Against Implicit Standards: Homophone Normalization in Machine Translation for Languages that use the Ge’ez Script.
Hellina Hailu Nigatu | Atnafu Lambebo Tonja | Henok Biadglign Ademtew | Hizkiel Mitiku Alemayehu | Negasi Haile Abadi | Tadesse Destaw Belay | Seid Muhie Yimam
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Homophone normalization, where characters that have the same sound in a writing script are mapped to a single character, is a pre-processing step applied in Amharic Natural Language Processing (NLP) literature. While this may improve performance as reported by automatic metrics, it also results in models that are unable to effectively process different forms of writing in a single language. Normalization may further harm transfer learning, since models trained on normalized data do not generalize well to other languages. In this paper, we experiment with monolingual training and cross-lingual transfer to understand the impacts of normalization on languages that use the Ge'ez script. We then propose a post-inference intervention in which normalization is applied to model predictions instead of training data. With this simple scheme of post-inference normalization, we show that we can achieve an increase in BLEU score of up to 1.03 while preserving language features in training.
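As an illustration, homophone normalization of this kind can be sketched as a character-level mapping applied to model output rather than to training text. The homophone groups below are illustrative assumptions (base forms of commonly conflated Ge'ez consonant series), not the paper's exact mapping table:

```python
# Illustrative homophone normalization for Ge'ez-script text.
# NOTE: the character groups here are an assumption for illustration;
# a full mapping would also cover the vowel-order variants of each base.
HOMOPHONE_MAP = str.maketrans({
    "ሐ": "ሀ", "ኀ": "ሀ",  # h-series variants folded to one form
    "ሠ": "ሰ",             # s-series
    "ፀ": "ጸ",             # ts-series
    "ዐ": "አ",             # glottal series
})

def normalize(text: str) -> str:
    """Map homophonous characters to a single canonical character."""
    return text.translate(HOMOPHONE_MAP)

def post_inference_normalize(prediction: str, reference: str) -> tuple[str, str]:
    """Post-inference intervention: the model is trained on raw text, and
    normalization is applied only to the prediction and reference before
    scoring, preserving orthographic variation during training."""
    return normalize(prediction), normalize(reference)
```

The design point is that the mapping is deterministic and lossy, so applying it after inference keeps the training data (and hence the model's learned orthography) intact while still allowing a normalized metric comparison.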

Viability of Machine Translation for Healthcare in Low-Resourced Languages
Hellina Hailu Nigatu | Nikita Mehandru | Negasi Haile Abadi | Blen Gebremeskel | Ahmed Alaa | Monojit Choudhury
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Machine Translation (MT) errors in high-stakes settings like healthcare pose unique risks that could lead to clinical harm. The challenges are even more pronounced for low-resourced languages, where human translators are scarce and MT tools perform poorly. In this work, we provide a taxonomy of MT errors for the healthcare domain using a publicly available MT system. Preparing an evaluation dataset from pre-existing medical datasets, we conduct our study focusing on two low-resourced languages: Amharic and Tigrinya. Based on our error analysis and findings from prior work, we test two pre-translation interventions, namely paraphrasing the source sentence and pivoting through a related language, for their effectiveness in reducing clinical risk. We find that MT errors in healthcare most commonly occur when the source sentence includes medical terminology and procedure descriptions, synonyms, figurative language, or word-order differences. We also find that pre-translation interventions are not effective in reducing clinical risk when the base translation model performs poorly. Based on our findings, we provide recommendations for improving MT for healthcare.
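The pivoting intervention mentioned above is, in structure, a two-hop pipeline: source to a related higher-resourced language, then to the target. A minimal sketch, where `translate_fn` is a hypothetical stand-in for whatever MT system is in use (not an API from the paper):

```python
from typing import Callable

# A hypothetical MT call signature: translate_fn(text, src_lang, tgt_lang) -> str.
TranslateFn = Callable[[str, str, str], str]

def pivot_translate(text: str, src: str, pivot: str, tgt: str,
                    translate_fn: TranslateFn) -> str:
    """Pre-translation intervention: route the translation through a
    related pivot language instead of translating src -> tgt directly."""
    intermediate = translate_fn(text, src, pivot)  # first hop: src -> pivot
    return translate_fn(intermediate, pivot, tgt)  # second hop: pivot -> tgt
```

Note that each hop can introduce its own errors, which is consistent with the finding that the intervention does not reduce clinical risk when the base model is weak.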

ProverbEval: Exploring LLM Evaluation Challenges for Low-resource Language Understanding
Israel Abebe Azime | Atnafu Lambebo Tonja | Tadesse Destaw Belay | Yonas Chanie | Bontu Fufa Balcha | Negasi Haile Abadi | Henok Biadglign Ademtew | Mulubrhan Abebe Nerea | Debela Desalegn Yadeta | Derartu Dagne Geremew | Assefa Atsbiha Tesfu | Philipp Slusallek | Thamar Solorio | Dietrich Klakow
Findings of the Association for Computational Linguistics: NAACL 2025