2025
AfroCS-xs: Creating a Compact, High-Quality, Human-Validated Code-Switched Dataset for African Languages
Kayode Olaleye | Arturo Oncevay | Mathieu Sibue | Nombuyiselo Zondi | Michelle Terblanche | Sibongile Mapikitla | Richard Lastrucci | Charese Smiley | Vukosi Marivate
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Code-switching is prevalent in multilingual communities, but adequate high-quality data for model development is lacking, especially for African languages. To address this, we present AfroCS-xs, a small human-validated synthetic code-switched dataset for four African languages (Afrikaans, Sesotho, Yoruba, isiZulu) and English within a specific domain—agriculture. Using large language models (LLMs), we generate code-switched sentences, including English translations, that are rigorously validated and corrected by native speakers. As a downstream evaluation task, we use this dataset to fine-tune different instruction-tuned LLMs for code-switched translation and compare their performance against machine translation (MT) models. Our results demonstrate that LLMs consistently improve in translation accuracy when fine-tuned on the high-quality AfroCS-xs dataset, highlighting that substantial gains can still be made with a low volume of data. We also observe improvements on natural code-switched and out-of-domain (personal finance) test sets. Overall, regardless of data size and prior exposure to a language, LLMs benefit from higher quality training data when translating code-switched texts in under-represented languages.
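As a rough illustration of the downstream evaluation described in this abstract, the sketch below scores a model's English translations of code-switched sentences against human-validated references with corpus-level BLEU and chrF via sacrebleu. The file name, JSON field names, and the placeholder "translation" step are assumptions for illustration only, not the paper's actual pipeline.

```python
# Hedged sketch: evaluating code-switched -> English translations against
# human-validated references with corpus-level BLEU and chrF (sacrebleu).
# File name and JSON field names below are hypothetical.
import json

import sacrebleu


def load_pairs(path):
    """Load (code-switched source, English reference) pairs from a JSONL file."""
    sources, references = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            sources.append(record["code_switched"])  # hypothetical field name
            references.append(record["english"])     # hypothetical field name
    return sources, references


def score_translations(hypotheses, references):
    """Corpus-level BLEU and chrF, standard surface metrics for MT output."""
    bleu = sacrebleu.corpus_bleu(hypotheses, [references])
    chrf = sacrebleu.corpus_chrf(hypotheses, [references])
    return {"bleu": bleu.score, "chrf": chrf.score}


if __name__ == "__main__":
    sources, references = load_pairs("afrocs_xs_test.jsonl")  # hypothetical file
    # Stand-in for the fine-tuned LLM's generation call; replace with real output.
    hypotheses = list(sources)
    print(score_translations(hypotheses, references))
```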
2024
Findings of the 2nd Shared Task on Multi-lingual Multi-task Information Retrieval at MRL 2024
Francesco Tinner | Raghav Mantri | Mammad Hajili | Chiamaka Chukwuneke | Dylan Massey | Benjamin A. Ajibade | Bilge Deniz Kocak | Abolade Dawud | Jonathan Atala | Hale Sirin | Kayode Olaleye | Anar Rzayev | Jafar Isbarov | Dursun Dashdamirov | David Adelani | Duygu Ataman
Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)
Large language models (LLMs) demonstrate exceptional proficiency in both the comprehension and generation of textual data, particularly in English, a language for which extensive public benchmarks have been established across a wide range of natural language processing (NLP) tasks. Nonetheless, their performance in multilingual contexts and specialized domains remains less rigorously validated, raising questions about their reliability and generalizability across linguistically diverse and domain-specific settings. The second edition of the Shared Task on Multilingual Multitask Information Retrieval aims to provide a comprehensive and inclusive multilingual evaluation benchmark that aids in assessing the ability of multilingual LLMs to capture logical, factual, or causal relationships within lengthy text contexts and to generate language under sparse settings, particularly in scenarios involving under-resourced languages. The shared task consists of two subtasks crucial to information retrieval: named entity recognition (NER) and reading comprehension (RC), covering 7 data-scarce languages, including Azerbaijani, Swiss German, and Turkish, which previously lacked annotated resources for information retrieval tasks. This year's edition focuses specifically on the multiple-choice question answering evaluation setting, which provides a more objective basis for comparing different methods across languages.
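The multiple-choice reading-comprehension setting mentioned above can be pictured with a small accuracy-scoring sketch; the record layout (context, question, options, gold index) is an assumed format, not the shared task's official schema.

```python
# Hedged sketch: accuracy scoring for a multiple-choice reading-comprehension
# evaluation. The record layout is an assumed format, not the official schema.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class MCQExample:
    context: str        # passage the question is grounded in
    question: str
    options: List[str]  # candidate answers
    answer_index: int   # index of the gold option


def evaluate(examples: List[MCQExample],
             choose: Callable[[MCQExample], int]) -> float:
    """Accuracy of a `choose` function that returns a predicted option index."""
    if not examples:
        return 0.0
    correct = sum(1 for ex in examples if choose(ex) == ex.answer_index)
    return correct / len(examples)


if __name__ == "__main__":
    toy = [
        MCQExample(
            context="The market opens at eight and closes at noon.",
            question="When does the market close?",
            options=["At eight", "At noon", "At midnight", "It never closes"],
            answer_index=1,
        ),
    ]
    # Trivial baseline: always pick the first option.
    print(evaluate(toy, choose=lambda ex: 0))
```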
Prompting towards Alleviating Code-Switched Data Scarcity in Under-Resourced Languages with GPT as a Pivot
Michelle Terblanche | Kayode Olaleye | Vukosi Marivate
Proceedings of the 3rd Annual Meeting of the Special Interest Group on Under-resourced Languages @ LREC-COLING 2024
Many multilingual communities, including numerous communities in Africa, frequently engage in code-switching during conversations. This behaviour stresses the need for natural language processing technologies adept at processing code-switched text. However, data scarcity, particularly in African languages, poses a significant challenge, as many are low-resourced and under-represented. In this study, we prompted GPT-3.5 to generate Afrikaans–English and Yoruba–English code-switched sentences, enhancing diversity using topic-keyword pairs, linguistic guidelines, and few-shot examples. Our findings indicate that the quality of generated sentences for languages using non-Latin scripts, like Yoruba, is considerably lower when compared with the high Afrikaans–English success rate. There is therefore a notable opportunity to refine prompting guidelines to yield sentences suitable for the fine-tuning of language models. We propose a framework for augmenting the diversity of synthetically generated code-switched data using GPT and suggest leveraging this technology to mitigate data scarcity in low-resourced languages, underscoring the essential role of native speakers in this process.
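A rough sketch of the kind of prompting setup described here: GPT-3.5 is asked for Afrikaans–English code-switched sentences, steered by a topic–keyword pair, brief linguistic guidelines, and a few-shot example (using the OpenAI Python SDK). The prompt wording, the few-shot sentence, and the generation parameters are illustrative assumptions rather than the paper's exact prompts.

```python
# Hedged sketch: prompting GPT-3.5 (OpenAI Python SDK) for Afrikaans–English
# code-switched sentences using a topic–keyword pair, brief linguistic
# guidelines, and one few-shot example. Prompt text and parameters are
# illustrative assumptions, not the paper's exact prompts.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

GUIDELINES = (
    "Write natural intra-sentential code-switched sentences mixing Afrikaans "
    "and English. Keep switch points grammatical in both languages."
)

FEW_SHOT = (
    'Example (Afrikaans–English): '
    '"Ek het die meeting geskuif omdat die deadline verander het."'
)


def generate_code_switched(topic: str, keyword: str, n: int = 5) -> str:
    prompt = (
        f"{GUIDELINES}\n{FEW_SHOT}\n"
        f"Topic: {topic}. Keyword to include: {keyword}.\n"
        f"Generate {n} new Afrikaans–English code-switched sentences, "
        f"each with an English translation."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(generate_code_switched(topic="household budgeting", keyword="begroting"))
```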
2023
Findings of the 1st Shared Task on Multi-lingual Multi-task Information Retrieval at MRL 2023
Francesco Tinner | David Ifeoluwa Adelani | Chris Emezue | Mammad Hajili | Omer Goldman | Muhammad Farid Adilazuarda | Muhammad Dehan Al Kautsar | Aziza Mirsaidova | Müge Kural | Dylan Massey | Chiamaka Chukwuneke | Chinedu Mbonu | Damilola Oluwaseun Oloyede | Kayode Olaleye | Jonathan Atala | Benjamin A. Ajibade | Saksham Bassi | Rahul Aralikatte | Najoung Kim | Duygu Ataman
Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)