2025
Limited-Resource Adapters Are Regularizers, Not Linguists
Marcell Fekete | Nathaniel Romney Robinson | Ernests Lavrinovics | Djeride Jean-Baptiste | Raj Dabre | Johannes Bjerva | Heather Lent
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Cross-lingual transfer from related high-resource languages is a well-established strategy to enhance low-resource language technologies. Prior work has shown that adapters hold promise for, e.g., improving low-resource machine translation (MT). In this work, we investigate an adapter souping method combined with cross-attention fine-tuning of a pre-trained MT model to leverage language transfer for three low-resource Creole languages, which exhibit relatedness to different language groups across distinct linguistic dimensions. Our approach improves performance substantially over baselines. However, we find that linguistic relatedness, or even a lack thereof, does not covary meaningfully with adapter performance. Surprisingly, our cross-attention fine-tuning approach appears equally effective with randomly initialized adapters, implying that the benefit of adapters in this setting lies in parameter regularization, not in meaningful information transfer. We provide analysis supporting this regularization hypothesis. Our findings underscore the reality that neural language processing involves many success factors, and that not all neural methods leverage linguistic knowledge in intuitive ways.
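The abstract does not spell out the souping procedure; "souping" is commonly read as element-wise averaging of the parameters of several trained adapters (here, adapters for languages related to the target Creole). Below is a minimal PyTorch sketch under that assumption; BottleneckAdapter and soup_adapters are hypothetical names for illustration, not the paper's code.

```python
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """A standard bottleneck adapter: down-projection, non-linearity,
    up-projection, added back to the hidden states via a residual connection."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))


def soup_adapters(adapters: list[BottleneckAdapter]) -> BottleneckAdapter:
    """Average ("soup") the parameters of several trained adapters element-wise."""
    souped = BottleneckAdapter(
        hidden_dim=adapters[0].down.in_features,
        bottleneck_dim=adapters[0].down.out_features,
    )
    with torch.no_grad():
        for name, param in souped.named_parameters():
            stacked = torch.stack(
                [dict(a.named_parameters())[name] for a in adapters]
            )
            param.copy_(stacked.mean(dim=0))
    return souped
```

In the paper's setting, such an adapter is combined with cross-attention fine-tuning of the pre-trained MT model; notably, the reported results suggest a randomly initialized adapter performs comparably in this pipeline.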
Linguistically Grounded Analysis of Language Models using Shapley Head Values
Marcell Fekete | Johannes Bjerva
Findings of the Association for Computational Linguistics: NAACL 2025
Understanding how linguistic knowledge is encoded in language models is crucial for improving their generalisation capabilities. In this paper, we investigate the processing of morphosyntactic phenomena by leveraging a recently proposed method for probing language models via Shapley Head Values (SHVs). Using the English-language BLiMP dataset, we test our approach on two widely used models, BERT and RoBERTa, and compare how linguistic constructions such as anaphor agreement and filler-gap dependencies are handled. Through quantitative pruning and qualitative clustering analysis, we demonstrate that attention heads responsible for processing related linguistic phenomena cluster together. Our results show that SHV-based attributions reveal distinct patterns across both models, providing insights into how language models organize and process linguistic information. These findings support the hypothesis that language models learn subnetworks corresponding to linguistic theory, with potential implications for cross-linguistic model analysis and interpretability in Natural Language Processing (NLP).
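SHVs attribute a model's behaviour on a phenomenon to individual attention heads via Shapley values, i.e., each head's average marginal contribution over subsets of heads. The exact estimator used in the paper is not given here; the sketch below is a generic Monte Carlo (permutation-sampling) approximation for illustration only. shapley_head_values and value_fn are hypothetical names; value_fn would, e.g., return BLiMP accuracy with only the active heads unmasked.

```python
import random
from typing import Callable, Sequence


def shapley_head_values(
    heads: Sequence[int],
    value_fn: Callable[[frozenset[int]], float],
    n_permutations: int = 200,
    seed: int = 0,
) -> dict[int, float]:
    """Monte Carlo approximation of Shapley values for attention heads.

    value_fn(S) should return task performance (e.g. accuracy on a BLiMP
    phenomenon) when only the heads in S are active and all others are masked.
    """
    rng = random.Random(seed)
    contributions = {h: 0.0 for h in heads}
    for _ in range(n_permutations):
        order = list(heads)
        rng.shuffle(order)
        active: set[int] = set()
        prev_value = value_fn(frozenset(active))
        for h in order:
            # Marginal contribution of head h given the heads added before it.
            active.add(h)
            new_value = value_fn(frozenset(active))
            contributions[h] += new_value - prev_value
            prev_value = new_value
    return {h: c / n_permutations for h, c in contributions.items()}
```

Heads with similar SHV profiles across phenomena could then be clustered, mirroring the paper's qualitative clustering analysis.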
2024
CreoleVal: Multilingual Multitask Benchmarks for Creoles
Heather Lent | Kushal Tatariya | Raj Dabre | Yiyi Chen | Marcell Fekete | Esther Ploeger | Li Zhou | Ruth-Ann Armstrong | Abee Eijansantos | Catriona Malau | Hans Erik Heje | Ernests Lavrinovics | Diptesh Kanojia | Paul Belony | Marcel Bollmann | Loïc Grobol | Miryam de Lhoneux | Daniel Hershcovich | Michel DeGraff | Anders Søgaard | Johannes Bjerva
Transactions of the Association for Computational Linguistics, Volume 12
Creoles represent an under-explored and marginalized group of languages, with few available resources for NLP research. While the genealogical ties between Creoles and a number of highly resourced languages imply a significant potential for transfer learning, this potential is hampered by the lack of annotated data. In this work we present CreoleVal, a collection of benchmark datasets spanning 8 different NLP tasks, covering up to 28 Creole languages; it is an aggregate of novel development datasets for reading comprehension, relation classification, and machine translation for Creoles, in addition to a practical gateway to a handful of preexisting benchmarks. For each benchmark, we conduct baseline experiments in a zero-shot setting in order to further ascertain the capabilities and limitations of transfer learning for Creoles. Ultimately, we see CreoleVal as an opportunity to empower research on Creoles in NLP and computational linguistics, and, in general, a step towards more equitable language technology around the globe.
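As an illustration of the zero-shot protocol, the sketch below (hypothetical names, not CreoleVal's evaluation code) scores a model on a Creole test set it was never trained on: the model sees only related high-resource data during training, and the Creole examples are used purely as a held-out test set; the metric would in practice be something like chrF or accuracy, depending on the task.

```python
from typing import Callable, Sequence


def exact_match(hyps: Sequence[str], refs: Sequence[str]) -> float:
    """Trivial placeholder metric; swap in chrF, accuracy, etc. per task."""
    return sum(h == r for h, r in zip(hyps, refs)) / len(refs)


def zero_shot_eval(
    predict: Callable[[str], str],
    creole_inputs: Sequence[str],
    creole_references: Sequence[str],
    score: Callable[[Sequence[str], Sequence[str]], float] = exact_match,
) -> float:
    """Zero-shot transfer evaluation: `predict` comes from a model trained only
    on high-resource (related-language) data; Creole data is evaluation-only."""
    hypotheses = [predict(x) for x in creole_inputs]
    return score(hypotheses, creole_references)
```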