Mikel Segura Elizalde



2025

FSTs vs ICL: Generalisation in LLMs for an under-resourced language
Ximena Gutierrez | Mikel Segura Elizalde | Victor Mijangos
Findings of the Association for Computational Linguistics: EMNLP 2025

LLMs have been widely adopted to tackle many traditional NLP tasks, but their effectiveness remains uncertain in scenarios where pre-trained models have limited prior knowledge of a language. In this work, we examine LLMs’ generalization in under-resourced settings through the task of orthographic normalization across Otomi language variants. We develop two approaches: a rule-based method using a finite-state transducer (FST) and an in-context learning (ICL) method that provides the model with string transduction examples. We compare the performance of FSTs and neural approaches in low-resource scenarios, providing insights into their potential and limitations. Our results show that while FSTs outperform LLMs in zero-shot settings, ICL enables LLMs to surpass FSTs, underscoring the importance of combining linguistic expertise with machine learning in current approaches for low-resource scenarios.
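
The abstract contrasts a rule-based FST with few-shot in-context learning for string transduction. The Python sketch below illustrates the general shape of both ideas; the rewrite rules, word pairs, and function names are hypothetical placeholders (not real Otomi correspondences or the paper's actual setup), and the ordered-rewrite loop is only a crude stand-in for a true finite-state transducer.

# All rules and word pairs below are hypothetical placeholders,
# not real Otomi orthographic correspondences.

# Rule-based idea: ordered rewrite rules as a crude stand-in for an FST.
REWRITE_RULES = [
    ("ts", "c"),  # hypothetical grapheme mapping
    ("u", "ʉ"),   # hypothetical vowel mapping
]

def normalize_rule_based(word: str) -> str:
    """Apply ordered string rewrites to normalize a spelling."""
    for source, target in REWRITE_RULES:
        word = word.replace(source, target)
    return word

# ICL idea: pack string-transduction examples into a few-shot prompt.
FEW_SHOT_PAIRS = [
    ("tsani", "cani"),  # hypothetical source/target spellings
    ("dumi", "dʉmi"),
]

def build_icl_prompt(query: str) -> str:
    """Format in-context transduction examples followed by the query."""
    lines = ["Normalize the spelling of each word:"]
    lines += [f"{src} -> {tgt}" for src, tgt in FEW_SHOT_PAIRS]
    lines.append(f"{query} ->")
    return "\n".join(lines)

print(normalize_rule_based("tsani"))  # prints "cani"
print(build_icl_prompt("tsamu"))

In a zero-shot setting the prompt would carry no example pairs, which is the regime where the abstract reports FSTs holding the advantage over LLMs.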