Jian Yun Zhuang


2025

INJONGO: A Multicultural Intent Detection and Slot-filling Dataset for 16 African Languages
Hao Yu | Jesujoba Oluwadara Alabi | Andiswa Bukula | Jian Yun Zhuang | En-Shiun Annie Lee | Tadesse Kebede Guge | Israel Abebe Azime | Happy Buzaaba | Blessing Kudzaishe Sibanda | Godson Koffi Kalipe | Jonathan Mukiibi | Salomon Kabongo Kabenamualu | Mmasibidi Setaka | Lolwethu Ndolela | Nkiruka Odu | Rooweither Mabuya | Shamsuddeen Hassan Muhammad | Salomey Osei | Sokhar Samb | Dietrich Klakow | David Ifeoluwa Adelani
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Slot-filling and intent detection are well-established tasks in conversational AI. However, current large-scale benchmarks for these tasks often exclude low-resource languages and rely on translations of English benchmarks, thereby predominantly reflecting Western-centric concepts. In this paper, we introduce INJONGO, a multicultural, open-source benchmark dataset for 16 African languages with utterances generated by native speakers across diverse domains, including banking, travel, home, and dining. Through extensive experiments, we benchmark fine-tuned multilingual transformer models and prompted large language models (LLMs), and show the advantage of leveraging African-cultural utterances over Western-centric ones for improving cross-lingual transfer from English. Experimental results reveal that current LLMs struggle with the slot-filling task, with GPT-4o achieving an average F1 score of only 26. Intent detection performance is notably better, with an average accuracy of 70.6%, though it still falls short of the fine-tuning baselines. On English, GPT-4o and the fine-tuning baselines perform similarly on intent detection, both reaching approximately 81% accuracy. Our findings suggest that LLM performance still lags behind for many low-resource African languages, and that more work is needed to improve their downstream performance.
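As a rough illustration of the zero-shot LLM-prompting setup the abstract describes, the sketch below classifies an utterance's intent by asking GPT-4o to pick one label from a fixed set. This is a minimal sketch, not the authors' evaluation code: the intent label set, prompt wording, and helper name are hypothetical placeholders.

```python
# Minimal sketch of zero-shot intent detection via LLM prompting.
# The label set and prompt template are illustrative placeholders,
# not the paper's actual evaluation protocol.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

INTENT_LABELS = ["book_flight", "check_balance", "order_food"]  # placeholder labels

def classify_intent(utterance: str) -> str:
    """Ask the model to choose exactly one intent label for an utterance."""
    prompt = (
        "Classify the intent of the following utterance. "
        f"Answer with exactly one label from {INTENT_LABELS}.\n"
        f"Utterance: {utterance}\nIntent:"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output, easier to score against gold intents
    )
    return response.choices[0].message.content.strip()
```

Constraining the answer to a fixed label set and using temperature 0 keeps the model's output directly comparable to gold intent labels when computing accuracy.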

IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models
David Ifeoluwa Adelani | Jessica Ojo | Israel Abebe Azime | Jian Yun Zhuang | Jesujoba Oluwadara Alabi | Xuanli He | Millicent Ochieng | Sara Hooker | Andiswa Bukula | En-Shiun Annie Lee | Chiamaka Ijeoma Chukwuneke | Happy Buzaaba | Blessing Kudzaishe Sibanda | Godson Koffi Kalipe | Jonathan Mukiibi | Salomon Kabongo Kabenamualu | Foutse Yuehgoh | Mmasibidi Setaka | Lolwethu Ndolela | Nkiruka Odu | Rooweither Mabuya | Salomey Osei | Shamsuddeen Hassan Muhammad | Sokhar Samb | Tadesse Kebede Guge | Tombekai Vangoni Sherman | Pontus Stenetorp
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

Despite the widespread adoption of large language models (LLMs), their remarkable capabilities remain limited to a few high-resource languages. Additionally, many low-resource languages (e.g., African languages) are often evaluated only on basic text classification tasks due to the lack of appropriate or comprehensive benchmarks outside of high-resource languages. In this paper, we introduce IrokoBench, a human-translated benchmark dataset for 17 typologically diverse low-resource African languages covering three tasks: natural language inference (AfriXNLI), mathematical reasoning (AfriMGSM), and multi-choice knowledge-based QA (AfriMMLU). We use IrokoBench to evaluate zero-shot, few-shot, and translate-test settings (where test sets are translated into English) across ten open and four proprietary LLMs. Our evaluation reveals a significant performance gap between high-resource languages (such as English and French) and low-resource African languages, as well as between open and proprietary models: the best-performing open model, Gemma 2 27B, reaches only 63% of the performance of the best proprietary model, GPT-4o. Machine-translating the test set into English before evaluation helped close the gap for larger English-centric models such as Gemma 2 27B and LLaMa 3.1 70B. These findings suggest that more effort is needed to develop and adapt LLMs for African languages.
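To make the translate-test setting concrete, here is a minimal sketch of translating a test example into English with an open machine-translation model before evaluating it. The NLLB checkpoint and language codes (Hausa as an example source) are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of the "translate-test" setting: translate a test
# example into English, then evaluate the English text as usual.
# Checkpoint and language codes are illustrative choices.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(checkpoint, src_lang="hau_Latn")  # Hausa source
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def translate_to_english(text: str) -> str:
    """Translate one source-language example into English."""
    inputs = tokenizer(text, return_tensors="pt")
    generated = model.generate(
        **inputs,
        # Force English as the target language for NLLB decoding.
        forced_bos_token_id=tokenizer.convert_tokens_to_ids("eng_Latn"),
        max_new_tokens=256,
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]
```

In this setting, downstream task prompts and scoring run entirely on the translated English text, which is why the approach mainly benefits English-centric models.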