Evaluating Robustness of LLMs to Typographical Noise in Yorùbá QA

Paul Okewunmi, Favour James, Oluwadunsin Fajemila


Abstract
Generative AI models are primarily accessed through chat interfaces, where user queries often contain typographical errors. While these models perform well in English, their robustness to noisy inputs in low-resource languages like Yorùbá remains underexplored. This work investigates a Yorùbá question-answering (QA) task by introducing synthetic typographical noise into clean inputs. We design a probabilistic noise injection strategy that simulates realistic human typos. In our experiments, each character in a clean sentence is independently altered, with noise levels ranging from 10% to 40%. We evaluate performance across three strong multilingual models using two complementary metrics: (1) a multilingual BERTScore to assess semantic similarity between outputs on clean and noisy inputs, and (2) an LLM-as-judge approach, where the best Yorùbá-capable model rates fluency, comprehension, and accuracy on a 1–5 scale. Results show that while English QA performance degrades gradually, Yorùbá QA suffers a sharper decline. At 40% noise, GPT-4o experiences over a 50% drop in comprehension ability, with similar declines for Gemini 2.0 Flash and Claude 3.7 Sonnet. We conclude with recommendations for noise-aware training and dedicated noisy Yorùbá benchmarks to enhance LLM robustness in low-resource settings.
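The abstract does not include the authors' implementation, but the noise model it describes (each character independently perturbed at a fixed rate of 10%–40%) can be illustrated with a minimal Python sketch. The function name inject_typos, the edit-operation set (substitute, delete, insert, swap), and the ASCII replacement alphabet below are illustrative assumptions, not details taken from the paper; in particular, Yorùbá diacritics would likely need a richer character set.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def inject_typos(text, noise_level=0.1, seed=None):
    """Return a noisy copy of `text` where each non-space character is
    altered independently with probability `noise_level` (e.g. 0.1-0.4)."""
    rng = random.Random(seed)
    chars = list(text)
    out = []
    i = 0
    while i < len(chars):
        c = chars[i]
        if c != " " and rng.random() < noise_level:
            op = rng.choice(["substitute", "delete", "insert", "swap"])
            if op == "substitute":
                out.append(rng.choice(ALPHABET))      # replace the character
            elif op == "delete":
                pass                                   # drop the character
            elif op == "insert":
                out.append(c)
                out.append(rng.choice(ALPHABET))       # add a stray character
            elif op == "swap" and i + 1 < len(chars):
                out.append(chars[i + 1])               # transpose with neighbour
                out.append(c)
                i += 1                                 # neighbour already consumed
            else:
                out.append(c)
        else:
            out.append(c)
        i += 1
    return "".join(out)

# Example: corrupt a Yorùbá question at the highest noise level studied (40%).
print(inject_typos("Kí ni olú ìlú Nàìjíríà?", noise_level=0.4, seed=13))
```

One could then feed both the clean and the noisy question to a model and compare the answers with multilingual BERTScore, as the abstract describes.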
Anthology ID: 2025.africanlp-1.29
Volume: Proceedings of the Sixth Workshop on African Natural Language Processing (AfricaNLP 2025)
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Constantine Lignos, Idris Abdulmumin, David Adelani
Venues: AfricaNLP | WS
Publisher: Association for Computational Linguistics
Pages: 195–202
URL: https://preview.aclanthology.org/display_plenaries/2025.africanlp-1.29/
Cite (ACL): Paul Okewunmi, Favour James, and Oluwadunsin Fajemila. 2025. Evaluating Robustness of LLMs to Typographical Noise in Yorùbá QA. In Proceedings of the Sixth Workshop on African Natural Language Processing (AfricaNLP 2025), pages 195–202, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Evaluating Robustness of LLMs to Typographical Noise in Yorùbá QA (Okewunmi et al., AfricaNLP 2025)
PDF: https://preview.aclanthology.org/display_plenaries/2025.africanlp-1.29.pdf