2025
Y-NQ: English-Yorùbá Evaluation dataset for Open-Book Reading Comprehension with Open-Ended Questions
Marta R. Costa-jussà | Joy Chen | Ife Adebara | Joe Chuang | Christophe Ropers | Eduardo Sánchez
Proceedings of the Sixth Workshop on African Natural Language Processing (AfricaNLP 2025)
The purpose of this work is to share an English-Yorùbá evaluation dataset for open-book reading comprehension with open-ended questions to assess the performance of models in both a high- and a low-resource language. The dataset contains 358 questions and answers on 338 English documents and 208 Yorùbá documents. Experiments show a consistent disparity in performance between the two languages, with Yorùbá falling behind English on automatic metrics even though documents are much shorter in this language. For a small set of documents of comparable length, Yorùbá performance drops by 2.5 times, and this comparison is validated with human evaluation. When analyzing performance by length, we observe that Yorùbá performance degrades dramatically for documents that reach 1,500 words, while English performance is barely affected at that length. Our dataset opens the door to testing whether English LLM reading comprehension capabilities extend to Yorùbá, which for the evaluated LLMs is not the case.
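A minimal illustration of the kind of length-based analysis described in the abstract: bucket documents by word count and average an answer-overlap score per bucket. The field names and the token-level F1 scorer are hypothetical stand-ins, not the paper's evaluation code.

```python
# Sketch only: group QA examples by document length and score each bucket.
from collections import defaultdict

def token_f1(prediction: str, reference: str) -> float:
    """Simple token-level F1 between a model answer and a reference answer."""
    pred, ref = prediction.lower().split(), reference.lower().split()
    common = len(set(pred) & set(ref))
    if not pred or not ref or common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(ref)
    return 2 * precision * recall / (precision + recall)

def score_by_length(examples, bucket_size=500):
    """examples: iterable of dicts with 'document', 'prediction', 'reference' keys (hypothetical)."""
    buckets = defaultdict(list)
    for ex in examples:
        n_words = len(ex["document"].split())
        bucket_start = (n_words // bucket_size) * bucket_size
        buckets[bucket_start].append(token_f1(ex["prediction"], ex["reference"]))
    # Average score per length bucket, e.g. 0-499, 500-999, 1000-1499, 1500+ words.
    return {start: sum(scores) / len(scores) for start, scores in sorted(buckets.items())}
```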
LCFO: Long Context and Long Form Output Dataset and Benchmarking
Marta R. Costa-jussà | Pierre Andrews | Mariano Coria Meglioli | Joy Chen | Joe Chuang | David Dale | Christophe Ropers | Alexandre Mourachko | Eduardo Sánchez | Holger Schwenk | Tuan A. Tran | Arina Turkatenko | Carleigh Wood
Findings of the Association for Computational Linguistics: ACL 2025
This paper presents the Long Context and Form Output (LCFO) benchmark, a novel evaluation framework for assessing gradual summarization and summary expansion capabilities across diverse domains. LCFO consists of long input documents (5k words average length), each of which comes with three summaries of different lengths (20%, 10%, and 5% of the input text), as well as approximately 15 questions and answers (QA) related to the input content. Notably, LCFO also provides alignments between specific QA pairs and corresponding summaries in 7 domains. The primary motivation behind providing summaries of different lengths is to establish a controllable framework for generating long texts from shorter inputs, i.e., summary expansion. To establish an evaluation metric framework for summarization and summary expansion, we provide human evaluation scores for human-generated outputs, as well as results from various state-of-the-art large language models (LLMs). GPT-4o-mini achieves the best human scores among automatic systems in both summarization and summary expansion tasks (≈ +10% and +20%, respectively). It even surpasses human output quality in the case of short summaries (≈ +7%). Overall, automatic metrics achieve low correlations with human evaluation scores (≈ 0.4) but moderate correlations on specific evaluation aspects such as fluency and attribution (≈ 0.6).
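As a small worked example of the controllable-length setup described above (not part of the LCFO release itself), the three target summary lengths can be derived directly from the input word count:

```python
# Sketch: compute the 20%/10%/5% target summary lengths for a given document.
def target_lengths(document: str, ratios=(0.20, 0.10, 0.05)) -> dict:
    n_words = len(document.split())
    return {f"{round(r * 100)}%": max(1, round(n_words * r)) for r in ratios}

# Example: a 5,000-word document yields targets of 1000, 500, and 250 words.
print(target_lengths(" ".join(["word"] * 5000)))
```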
2M-BELEBELE: Highly Multilingual Speech and American Sign Language Comprehension Dataset
Marta R. Costa-jussà | Bokai Yu | Pierre Andrews | Belen Alastruey | Necati Cihan Camgoz | Joe Chuang | Jean Maillard | Christophe Ropers | Arina Turkatenko | Carleigh Wood
Findings of the Association for Computational Linguistics: ACL 2025
We introduce the first highly multilingual speech and American Sign Language (ASL) comprehension dataset by extending BELEBELE. Our dataset covers 91 spoken languages at the intersection of BELEBELE and FLEURS, and one sign language (ASL). As a by-product, we also extend the Automatic Speech Recognition benchmark, FLEURS, by 20%. We evaluate the 2M-BELEBELE dataset in both 5-shot and zero-shot settings; across languages, speech comprehension accuracy is on average ≈ 10% lower than reading comprehension.
Towards Massive Multilingual Holistic Bias
Xiaoqing Tan | Prangthip Hansanti | Arina Turkatenko | Joe Chuang | Carleigh Wood | Bokai Yu | Christophe Ropers | Marta R. Costa-jussà
Proceedings of the 6th Workshop on Gender Bias in Natural Language Processing (GeBNLP)
In the current landscape of automatic language generation, there is a need to understand, evaluate, and mitigate demographic biases, as existing models are becoming increasingly multilingual. To address this, we present the initial eight languages from the Massive Multilingual Holistic Bias (MMHB) dataset and benchmark, consisting of approximately 6 million sentences. The sentences are designed to induce biases towards different groups of people, which can yield significant results when using them as a benchmark to test different text generation models. To further scale up in terms of both language coverage and size, and to leverage limited human translation, we use a systematic approach to independently translate sentence parts. This technique carefully designs a structure to dynamically generate multiple sentence variations and significantly reduces the human translation workload. The translation process has been meticulously conducted to avoid an English-centric perspective and to include all necessary morphological variations for languages that require them, improving on the original English HOLISTICBIAS. Finally, we utilize MMHB to report results on gender bias and added toxicity in MT tasks.
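A minimal sketch of the general compositional idea behind translating sentence parts independently: templates, descriptors, and person-nouns are kept separate so each part can be translated on its own and then recombined into many sentence variations. The placeholders below are hypothetical and are not the actual MMHB templates or descriptors.

```python
# Sketch: recombine independently translated sentence parts into variations.
from itertools import product

patterns = [
    "I am {article} {descriptor} {noun}.",
    "I have a friend who is {article} {descriptor} {noun}.",
]  # hypothetical sentence patterns
descriptors = ["left-handed", "elderly"]        # hypothetical demographic descriptors
nouns = [("a", "person"), ("a", "parent")]      # hypothetical (article, person-noun) pairs

def generate_variations():
    # Each part could be translated separately, then composed per target language.
    for pattern, descriptor, (article, noun) in product(patterns, descriptors, nouns):
        yield pattern.format(article=article, descriptor=descriptor, noun=noun)

for sentence in generate_variations():
    print(sentence)
```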
On the Role of Speech Data in Reducing Toxicity Detection Bias
Samuel Bell | Mariano Coria Meglioli | Megan Richards | Eduardo Sánchez | Christophe Ropers | Skyler Wang | Adina Williams | Levent Sagun | Marta R. Costa-jussà
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Text toxicity detection systems exhibit significant biases, producing disproportionate rates of false positives on samples mentioning demographic groups. But what about toxicity detection in speech? To investigate the extent to which text-based biases are mitigated by speech-based systems, we produce a set of high-quality group annotations for the multilingual MuTOX dataset, and then leverage these annotations to systematically compare speech- and text-based toxicity classifiers. Our findings indicate that access to speech data during inference supports reduced bias against group mentions, particularly for ambiguous and disagreement-inducing samples. Our results also suggest that improving classifiers, rather than transcription pipelines, is more helpful for reducing group bias. We publicly release our annotations and provide recommendations for future toxicity dataset construction.
SpiRit-LM: Interleaved Spoken and Written Language Model
Tu Anh Nguyen | Benjamin Muller | Bokai Yu | Marta R. Costa-jussa | Maha Elbayad | Sravya Popuri | Christophe Ropers | Paul-Ambroise Duquenne | Robin Algayres | Ruslan Mavlyutov | Itai Gat | Mary Williamson | Gabriel Synnaeve | Juan Pino | Benoît Sagot | Emmanuel Dupoux
Transactions of the Association for Computational Linguistics, Volume 13
We introduce SpiRit-LM, a foundation multimodal language model that freely mixes text and speech. Our model is based on a 7B pretrained text language model that we extend to the speech modality by continuously training it on text and speech units. Speech and text sequences are concatenated as a single stream of tokens, and trained with a word-level interleaving method using a small automatically curated speech-text parallel corpus. SpiRit-LM comes in two versions: a Base version that uses speech phonetic units (HuBERT) and an Expressive version that models expressivity using pitch and style units in addition to the phonetic units. For both versions, the text is encoded with subword BPE tokens. The resulting model displays both the semantic abilities of text models and the expressive abilities of speech models. Additionally, we demonstrate that SpiRit-LM can learn new tasks in a few-shot fashion across modalities (i.e., ASR, TTS, Speech Classification). We make available model weights and inference code.
2024
MuTox: Universal MUltilingual Audio-based TOXicity Dataset and Zero-shot Detector
Marta Costa-jussà | Mariano Meglioli | Pierre Andrews | David Dale | Prangthip Hansanti | Elahe Kalbassi | Alexandre Mourachko | Christophe Ropers | Carleigh Wood
Findings of the Association for Computational Linguistics: ACL 2024
Research in toxicity detection in natural language processing for the speech modality (audio-based) is quite limited, particularly for languages other than English. To address these limitations and lay the groundwork for truly multilingual audio-based toxicity detection, we introduce MuTox, the first highly multilingual audio-based dataset with toxicity labels, covering 14 different linguistic families. The dataset comprises 20,000 audio utterances for English and Spanish, and 4,000 for the other 28 languages. To demonstrate the quality of this dataset, we trained the MuTox audio-based toxicity classifier, which enables zero-shot toxicity detection across a wide range of languages. This classifier performs on par with existing text-based trainable classifiers, while expanding the language coverage more than tenfold. When compared to a wordlist-based classifier that covers a similar number of languages, MuTox improves F1-Score by an average of 100%. This significant improvement underscores the potential of MuTox in advancing the field of audio-based toxicity detection.
Speech Data from Radio Broadcasts for Low Resource Languages
Bismarck Bamfo Odoom | Leibny Paola Garcia Perera | Prangthip Hansanti | Loic Barrault | Christophe Ropers | Matthew Wiesner | Kenton Murray | Alexandre Mourachko | Philipp Koehn
Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024)
We created a collection of speech data for 48 low resource languages. The corpus is extracted from radio broadcasts and processed with novel speech detection and language identification models based on a manually vetted subset of the audio for 10 languages. The data is made publicly available.
2023
HalOmi: A Manually Annotated Benchmark for Multilingual Hallucination and Omission Detection in Machine Translation
David Dale | Elena Voita | Janice Lam | Prangthip Hansanti | Christophe Ropers | Elahe Kalbassi | Cynthia Gao | Loic Barrault | Marta Costa-jussà
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Hallucinations in machine translation are translations that contain information completely unrelated to the input. Omissions are translations that do not include some of the input information. While both cases tend to be catastrophic errors undermining user trust, annotated data with these types of pathologies is extremely scarce and is limited to a few high-resource languages. In this work, we release an annotated dataset for the hallucination and omission phenomena covering 18 translation directions with varying resource levels and scripts. Our annotation covers different levels of partial and full hallucinations as well as omissions both at the sentence and at the word level. Additionally, we revisit previous methods for hallucination and omission detection, show that conclusions made based on a single language pair largely do not hold for a large-scale evaluation, and establish new solid baselines.
Multilingual Holistic Bias: Extending Descriptors and Patterns to Unveil Demographic Biases in Languages at Scale
Marta Costa-jussà | Pierre Andrews | Eric Smith | Prangthip Hansanti | Christophe Ropers | Elahe Kalbassi | Cynthia Gao | Daniel Licht | Carleigh Wood
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
We introduce a multilingual extension of the HolisticBias dataset, the largest English template-based taxonomy of textual people references: Multilingual HolisticBias. This extension consists of 20,459 sentences in 50 languages distributed across 13 demographic axes. Source sentences are built from combinations of 118 demographic descriptors and three patterns, excluding nonsensical combinations. Multilingual translations include alternatives for gendered languages that cover gendered translations when there is ambiguity in English. Our dataset is intended to uncover demographic imbalances and to be the tool to quantify mitigations towards them. Our initial findings show that translation quality for EN-to-XX translations is an average of almost 8 spBLEU better when evaluating with the masculine human reference compared to the feminine one. In the opposite direction, XX-to-EN, we compare the robustness of the model when the source input only differs in gender (masculine or feminine), and masculine translations are an average of almost 4 spBLEU better than feminine ones. When embedding sentences into a joint multilingual sentence representation space, we find that for most languages masculine translations are significantly closer to the English neutral sentences.
Toxicity in Multilingual Machine Translation at Scale
Marta Costa-jussà | Eric Smith | Christophe Ropers | Daniel Licht | Jean Maillard | Javier Ferrando | Carlos Escolano
Findings of the Association for Computational Linguistics: EMNLP 2023
Machine Translation systems can produce different types of errors, some of which are characterized as critical or catastrophic due to the specific negative impact that they can have on users. In this paper we focus on one type of critical error: added toxicity. We evaluate and analyze added toxicity when translating a large evaluation dataset (HOLISTICBIAS, over 472k sentences, covering 13 demographic axes) from English into 164 languages. An automatic toxicity evaluation shows that added toxicity across languages varies from 0% to 5%. The output languages with the most added toxicity tend to be low-resource ones, and the demographic axes with the most added toxicity include sexual orientation, gender and sex, and ability. We also perform human evaluation on a subset of 8 translation directions, confirming the prevalence of true added toxicity. We use a measurement of the amount of source contribution to the translation, where a low source contribution implies hallucination, to interpret what causes toxicity. Making use of the input attributions allows us to explain toxicity, because the source contributions significantly correlate with toxicity for 84% of languages studied. Given our findings, our recommendations to reduce added toxicity are to curate training data to avoid mistranslations, mitigate hallucination and check unstable translations.
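A minimal sketch, with made-up numbers rather than the paper's data, of the reported analysis step: correlating per-translation source-contribution scores with added-toxicity labels, where a negative correlation is consistent with hallucination (low source contribution) driving added toxicity.

```python
# Sketch: relate source-contribution scores to added-toxicity labels.
from scipy.stats import pearsonr

source_contribution = [0.72, 0.35, 0.80, 0.41, 0.65, 0.30]  # hypothetical attribution scores
added_toxicity      = [0,    1,    0,    1,    0,    1]     # hypothetical labels (1 = toxic output)

r, p_value = pearsonr(source_contribution, added_toxicity)
# A negative r would support the "low source contribution implies hallucination,
# hallucination implies added toxicity" interpretation described in the abstract.
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```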
The Gender-GAP Pipeline: A Gender-Aware Polyglot Pipeline for Gender Characterisation in 55 Languages
Benjamin Muller | Belen Alastruey | Prangthip Hansanti | Elahe Kalbassi | Christophe Ropers | Eric Smith | Adina Williams | Luke Zettlemoyer | Pierre Andrews | Marta R. Costa-jussà
Proceedings of the Eighth Conference on Machine Translation
Gender biases in language generation systems are challenging to mitigate. One possible source for these biases is gender representation disparities in the training and evaluation data. Despite recent progress in documenting this problem and many attempts at mitigating it, we still lack shared methodology and tooling to report gender representation in large datasets. Such quantitative reporting will enable further mitigation, e.g., via data augmentation. This paper describes the Gender-Gap Pipeline (for Gender-Aware Polyglot Pipeline), an automatic pipeline to characterize gender representation in large-scale datasets for 55 languages. The pipeline uses a multilingual lexicon of gendered person-nouns to quantify the gender representation in text. We showcase it to report gender representation in WMT training data and development data for the News task, confirming that current data is skewed towards masculine representation. Having unbalanced datasets may indirectly optimize our systems towards outperforming one gender over the others. We suggest introducing our gender quantification pipeline in current datasets and, ideally, modifying them toward a balanced representation.
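A minimal sketch of lexicon-based gender counting in the spirit of the pipeline described above; the tiny English word lists are illustrative stand-ins for the pipeline's multilingual lexicon of gendered person-nouns and are not its actual contents.

```python
# Sketch: count masculine vs. feminine person references in a text via a lexicon.
import re
from collections import Counter

MASCULINE = {"he", "him", "his", "man", "men", "father", "son", "brother"}      # illustrative only
FEMININE = {"she", "her", "hers", "woman", "women", "mother", "daughter", "sister"}  # illustrative only

def gender_counts(text: str) -> Counter:
    tokens = re.findall(r"[a-zà-ÿ']+", text.lower())
    counts = Counter()
    for token in tokens:
        if token in MASCULINE:
            counts["masculine"] += 1
        elif token in FEMININE:
            counts["feminine"] += 1
    return counts

print(gender_counts("He told his sister that the women arrived before the men."))
```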