Joe Chuang
2025
Y-NQ: English-Yorùbá Evaluation dataset for Open-Book Reading Comprehension with Open-Ended Questions
Marta R. Costa-jussà | Joy Chen | Ife Adebara | Joe Chuang | Christophe Ropers | Eduardo Sánchez
Proceedings of the Sixth Workshop on African Natural Language Processing (AfricaNLP 2025)
The purpose of this work is to share an English-Yorùbá evaluation dataset for open-book reading comprehension with open-ended questions, to assess the performance of models in both a high- and a low-resource language. The dataset contains 358 questions and answers on 338 English documents and 208 Yorùbá documents. Experiments show a consistent disparity in performance between the two languages, with Yorùbá falling behind English on automatic metrics even though documents are much shorter in this language. For a small set of documents of comparable length, Yorùbá performance drops by a factor of 2.5, and this comparison is validated with human evaluation. When analyzing performance by length, we observe that Yorùbá performance decreases dramatically for documents that reach 1,500 words, while English performance is barely affected at that length. Our dataset opens the door to showing whether English LLM reading comprehension capabilities extend to Yorùbá, which for the evaluated LLMs is not the case.
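As a rough illustration of the length-bucketed evaluation described in the abstract, the sketch below scores model answers against reference answers with a simple token-level F1 and averages the scores per document-length bucket. The field names (`document`, `question`, `answer`, `prediction`), the token-F1 metric, and the bucket boundaries are assumptions for illustration, not the paper's exact protocol or metrics.

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a model answer and the reference answer."""
    pred_tokens, ref_tokens = prediction.lower().split(), reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def score_by_length(examples, buckets=(500, 1000, 1500, float("inf"))):
    """Average answer F1 per document-length bucket (word count)."""
    results = {b: [] for b in buckets}
    for ex in examples:  # each ex: {"document", "question", "answer", "prediction"} (hypothetical fields)
        n_words = len(ex["document"].split())
        bucket = next(b for b in buckets if n_words <= b)
        results[bucket].append(token_f1(ex["prediction"], ex["answer"]))
    return {b: sum(scores) / len(scores) for b, scores in results.items() if scores}

# Running the same scoring over the English and Yorùbá subsets separately
# makes the per-length performance gap directly comparable:
# print(score_by_length(english_examples), score_by_length(yoruba_examples))
```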
Towards Massive Multilingual Holistic Bias
Xiaoqing Tan | Prangthip Hansanti | Arina Turkatenko | Joe Chuang | Carleigh Wood | Bokai Yu | Christophe Ropers | Marta R. Costa-jussà
Proceedings of the 6th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
In the current landscape of automatic language generation, there is a need to understand, evaluate, and mitigate demographic biases, as existing models are becoming increasingly multilingual. To address this, we present the initial eight languages of the Massive Multilingual Holistic Bias (MMHB) dataset and benchmark, consisting of approximately 6 million sentences. The sentences are designed to induce biases towards different groups of people, which can yield significant results when they are used as a benchmark to test different text generation models. To further scale up in terms of both language coverage and size, and to leverage limited human translation, we use a systematic approach to independently translate sentence parts. This technique carefully designs a structure to dynamically generate multiple sentence variations and significantly reduces the human translation workload. The translation process has been meticulously conducted to avoid an English-centric perspective and to include all necessary morphological variations for languages that require them, improving on the original English HOLISTICBIAS. Finally, we use MMHB to report results on gender bias and added toxicity in machine translation tasks.
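A minimal sketch of the kind of template-based composition the abstract alludes to: independently translated sentence parts (patterns, descriptors, and nouns) are recombined programmatically so that a small number of human translations yields many sentence variations. The placeholder names and the English-only example lists below are illustrative assumptions, not the MMHB templates themselves; English article agreement ("a"/"an") stands in for the per-language morphological variations the abstract mentions.

```python
from itertools import product

# Hypothetical sentence parts: in this style of generation, each part is translated
# independently by humans, then recombined into full sentences in code.
patterns = [
    "I am {article} {descriptor} {noun}.",
    "I love being {article} {descriptor} {noun}.",
]
descriptors = ["left-handed", "hard of hearing", "middle-aged"]
nouns = ["parent", "neighbor", "colleague"]

def make_variations(patterns, descriptors, nouns):
    """Cross all translated parts to produce every sentence variation."""
    sentences = []
    for pattern, descriptor, noun in product(patterns, descriptors, nouns):
        # Simple morphological adjustment; real languages may need richer agreement rules.
        article = "an" if descriptor[0].lower() in "aeiou" else "a"
        sentences.append(pattern.format(article=article, descriptor=descriptor, noun=noun))
    return sentences

variations = make_variations(patterns, descriptors, nouns)
print(len(variations))  # 2 patterns x 3 descriptors x 3 nouns = 18 sentences
```

The design point is the multiplicative effect: translating the parts once produces all combinations automatically, which is what keeps the human translation workload small as the benchmark scales.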
Co-authors
- Marta R. Costa-jussà (2)
- Christophe Ropers (2)
- Ife Adebara (1)
- Joy Chen (1)
- Prangthip Hansanti (1)