Proceedings of the 22nd SIGMORPHON Workshop on Computational Morphology, Phonology, and Phonetics

Garrett Nicolai, Eleanor Chodroff, Frederic Mailhot, Çağrı Çöltekin (Editors)


Anthology ID:
2025.sigmorphon-main
Month:
May
Year:
2025
Address:
Albuquerque, New Mexico, USA
Venues:
SIGMORPHON | WS
SIG:
SIGMORPHON
Publisher:
Association for Computational Linguistics
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.sigmorphon-main/
DOI:
ISBN:
979-8-89176-231-2
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.sigmorphon-main.pdf

Prompt and circumstance: A word-by-word LLM prompting approach to interlinear glossing for low-resource languages
Micha Elsner | David Liu

This paper presents VeLePa, an inflected verbal lexicon of Central Pame (pbs, cent2154), an Otomanguean language from Mexico. This resource contains 12,528 words in phonological form representing the complete inflectional paradigms of 216 verbs, supplemented with use frequencies. Computer-operable (CLDF) inflected lexicons of non-WEIRD, under-resourced languages are urgently needed to expand digital capacities in these languages (e.g. in NLP). VeLePa contributes to this goal, and does so with data from a language which is morphologically extraordinary, with unusually high levels of irregularity and multiple conjugations at various loci within the word: prefixes, stems, tone, and suffixes constitute different albeit interrelated subsystems of inflection.

Partly automated creation of interlinear glossed text (IGT) has the potential to assist in linguistic documentation. We argue that LLMs can make this process more accessible to linguists because of their capacity to follow natural-language instructions. We investigate the effectiveness of a retrieval-based LLM prompting approach to glossing, applied to the seven languages from the SIGMORPHON 2023 shared task. Our system beats the BERT-based shared-task baseline for every language in the morpheme-level score category, and we show that a simple 3-best oracle has higher word-level scores than the challenge winner (a tuned sequence model) in five languages. In a case study on Tsez, we ask the LLM to automatically create and follow linguistic instructions, reducing errors on a confusing grammatical feature. Our results thus demonstrate the potential contributions that LLMs can make in interactive systems for glossing, both in making suggestions to human annotators and in following directions.
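The k-best oracle score mentioned above can be illustrated with a short sketch: a word counts as correct if any of the model's top-k candidate glosses matches the gold gloss. This is a hypothetical illustration, not the paper's evaluation code; the glosses and candidate lists below are invented toy data.

```python
def oracle_word_accuracy(candidates, gold, k=3):
    """candidates: one ranked list of candidate glosses per word;
    gold: the gold gloss per word. A word is a hit if the gold
    gloss appears anywhere in that word's top-k candidates."""
    assert len(candidates) == len(gold)
    if not gold:
        return 0.0
    hits = sum(1 for cands, g in zip(candidates, gold) if g in cands[:k])
    return hits / len(gold)

# Toy example: three words, three ranked candidate glosses each.
cands = [
    ["1SG-go-PST", "1SG-go-PRS", "3SG-go-PST"],  # gold at rank 1
    ["dog-PL", "dog-SG", "dog-ERG"],             # gold at rank 2
    ["house-LOC", "house-DAT", "house-ABL"],     # gold not in top 3
]
gold = ["1SG-go-PST", "dog-SG", "house-GEN"]
print(oracle_word_accuracy(cands, gold, k=3))  # 2 of 3 words recoverable
```

With k=1 the same data scores only the rank-1 hit, which is why an oracle over a few candidates can beat a single-output system.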

West Germanic noun-noun compounds and the morphology-syntax trade-off
Pablo Mosteiro | Damián Blasi | Denis Paperno

This paper examines the linguistic distinction between syntax and morphology, focusing on noun-noun compounds in three West Germanic languages (English, Dutch, and German). Previous studies using the Parallel Bible Corpus have found a trade-off between word order (syntax) and word structure (morphology), with languages optimizing information conveyance through these systems. Our research question is whether manipulating English noun-noun compounds to resemble Dutch and German constructions can reproduce the observed distance between these languages in the order-structure plane. We extend a word-pasting procedure to merge increasingly common noun-noun pairs in English Bible translations. After each merge, we estimate the information contained in word order and word structure using entropy calculations. Our results show that pasting noun-noun pairs reduces the difference between English and the other languages, suggesting that orthographic conventions defining word boundaries play a role in this distinction. However, the effect is not pronounced, and results are statistically inconclusive.
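The word-pasting and entropy-estimation steps can be sketched as follows. This is a simplified illustration under toy assumptions (a hand-tagged noun set, unigram entropy over surface tokens), not the authors' Parallel Bible Corpus pipeline or their order/structure entropy estimators.

```python
from collections import Counter
import math

def entropy(tokens):
    """Shannon entropy (bits) of the unigram token distribution."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def paste_most_common_pair(tokens, nouns):
    """Merge every occurrence of the most frequent adjacent noun-noun pair."""
    pairs = Counter(
        (a, b) for a, b in zip(tokens, tokens[1:]) if a in nouns and b in nouns
    )
    if not pairs:
        return tokens
    (a, b), _ = pairs.most_common(1)[0]
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
            merged.append(a + b)  # e.g. "olive" + "tree" -> "olivetree"
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

toy = "the olive tree stood by the olive tree on the hill".split()
nouns = {"olive", "tree", "hill"}
pasted = paste_most_common_pair(toy, nouns)
print(pasted)
print(entropy(toy), entropy(pasted))
```

Iterating the pasting step merges increasingly common noun-noun pairs, shifting information from word order into word structure, which is the manipulation the paper tracks in the order-structure plane.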

The Impact of Dialect Variation on Robust Automatic Speech Recognition for Catalan
Zachary Hopton | Eleanor Chodroff

To accurately transcribe a speech signal, automatic speech recognition (ASR) systems must show robustness to a wide range of task-independent variation, such as speaker factors, recording quality, or even "adversarial noise" designed to disrupt performance. We manipulated the dialect composition of fine-tuning data for ASR to study whether balancing the relative proportion of dialects had an impact on models' robustness to two such sources of variation: dialect variation and adversarial perturbations. We fine-tuned XLSR-53 for Catalan ASR using four different dialect compositions, each containing the Central Catalan dialect. These were defined as 100%, 80%, 50%, and 20% Central Catalan, with the remaining portions split evenly between four other Catalan dialects. While increasing the relative proportion of dialect variants improved models' dialect robustness, it did not have a meaningful impact on adversarial robustness. These findings suggest that while improvements to ASR can be made by diversifying the training data, such changes do not sufficiently counteract adversarial attacks, leaving the technology open to security threats.
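The dialect compositions described above can be made concrete with a small sketch: a fixed-size fine-tuning set is X% Central Catalan, with the remainder divided evenly across four other dialects. The total size and the placeholder dialect labels are assumptions for illustration; the paper's actual data sizes are not given here.

```python
def composition(total, central_frac, others=("d1", "d2", "d3", "d4")):
    """Sample counts per dialect: central_frac goes to Central Catalan,
    the rest is split evenly among the other dialects."""
    central = round(total * central_frac)
    per_other = (total - central) // len(others)
    plan = {"central": central}
    plan.update({d: per_other for d in others})
    return plan

# e.g. the 80% condition over a hypothetical 10,000-utterance budget:
print(composition(10_000, 0.80))  # central: 8000, each other dialect: 500
```

The 100% condition degenerates to Central-only data, so the four conditions trade Central Catalan mass against an even spread over the remaining dialects.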

Probing Neural Network Generalization using Default Patterns
Brandon Prickett | Tianyi Nyu | Katya Pertsova

Whether neural-net models can learn minority-default patterns has been a matter of some controversy. Results based on modeling real human language data are hard to interpret due to their complexity. Therefore, we examine the learning of a simple artificial language pattern involving defaults using three computational models: an encoder-decoder RNN, a Transformer encoder, and a logistic regression model. Overall, we find that the models have the hardest time with minority defaults, but can eventually learn them and apply them to novel words (although they do not always extend them to completely novel segments or novel CV-sequences). Type frequency has the largest effect on learning in all models, trumping the effect of distribution. We examine the weights of two models to provide further insights into how defaults are represented inside the models.