Mariano Coria Meglioli


2025

BOUQuET: dataset, Benchmark and Open initiative for Universal Quality Evaluation in Translation
Pierre Andrews | Mikel Artetxe | Mariano Coria Meglioli | Marta R. Costa-jussà | Joe Chuang | David Dale | Mark Duppenthaler | Nathanial Paul Ekberg | Cynthia Gao | Daniel Edward Licht | Jean Maillard | Alexandre Mourachko | Christophe Ropers | Safiyyah Saleem | Eduardo Sánchez | Ioannis Tsiamas | Arina Turkatenko | Albert Ventayol-Boada | Shireen Yates
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

BOUQuET is a multi-way, multicentric and multi-register/domain dataset and benchmark, and a broader collaborative initiative. The dataset is handcrafted in 8 non-English languages (i.e., Egyptian Arabic and Modern Standard Arabic, French, German, Hindi, Indonesian, Mandarin Chinese, Russian, and Spanish). Each of these source languages is representative of the most widely spoken ones and therefore has the potential to serve as a pivot language enabling more accurate translations. The dataset is multicentric to enforce representation of multilingual language features. In addition, it goes beyond the sentence level, as it is organized in paragraphs of various lengths. Compared with related machine translation datasets, we show that BOUQuET has a broader representation of domains while simplifying the translation task for non-experts. BOUQuET is therefore especially suitable for crowd-sourced extension, for which we are launching a call aiming to collect a multi-way parallel corpus covering any written language. The dataset is freely available at https://huggingface.co/datasets/facebook/bouquet.
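A minimal sketch of loading the released dataset with the Hugging Face datasets library; the split name below is an assumption made for illustration, and the dataset card at the URL above documents the actual configurations and schema.

# Sketch: load BOUQuET from the Hugging Face Hub and inspect one record.
# The split name ("train") and field layout are assumptions; consult the
# dataset card at https://huggingface.co/datasets/facebook/bouquet.
from datasets import load_dataset

ds = load_dataset("facebook/bouquet", split="train")  # split name is an assumption
print(ds[0])  # shows which language, register/domain, and paragraph fields exist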

LCFO: Long Context and Long Form Output Dataset and Benchmarking
Marta R. Costa-jussà | Pierre Andrews | Mariano Coria Meglioli | Joy Chen | Joe Chuang | David Dale | Christophe Ropers | Alexandre Mourachko | Eduardo Sánchez | Holger Schwenk | Tuan A. Tran | Arina Turkatenko | Carleigh Wood
Findings of the Association for Computational Linguistics: ACL 2025

This paper presents the Long Context and Long Form Output (LCFO) benchmark, a novel evaluation framework for assessing gradual summarization and summary expansion capabilities across diverse domains. LCFO consists of long input documents (5,000 words on average), each of which comes with three summaries of different lengths (20%, 10%, and 5% of the input text), as well as approximately 15 questions and answers (QA) related to the input content. Notably, LCFO also provides alignments between specific QA pairs and corresponding summaries in 7 domains. The primary motivation behind providing summaries of different lengths is to establish a controllable framework for generating long texts from shorter inputs, i.e., summary expansion. To establish an evaluation metric framework for summarization and summary expansion, we provide human evaluation scores for human-generated outputs, as well as results from various state-of-the-art large language models (LLMs). GPT-4o-mini achieves the best human evaluation scores among automatic systems in both the summarization and summary expansion tasks (≈ +10% and +20%, respectively). It even surpasses human output quality in the case of short summaries (≈ +7%). Overall, automatic metrics correlate weakly with human evaluation scores (≈ 0.4) but moderately on specific evaluation aspects such as fluency and attribution (≈ 0.6).
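As a hedged illustration of the length-controlled setup described above (not the paper's actual tooling), the sketch below computes the 20%/10%/5% word budgets for a document and checks whether a candidate summary respects one of them; the helper names and the +/-10% tolerance are hypothetical.

# Hypothetical helpers illustrating length-controlled summarization targets:
# budgets are fixed fractions of the source document's word count.
def summary_word_budgets(document: str, ratios=(0.20, 0.10, 0.05)) -> dict:
    """Return target word counts for each compression ratio."""
    n_words = len(document.split())
    return {r: max(1, round(n_words * r)) for r in ratios}

def within_budget(summary: str, budget: int, tolerance: float = 0.10) -> bool:
    """Check that a candidate summary stays within +/-10% of its word budget."""
    return abs(len(summary.split()) - budget) <= tolerance * budget

doc = " ".join(["word"] * 5000)          # stand-in for a ~5k-word LCFO input
print(summary_word_budgets(doc))         # {0.2: 1000, 0.1: 500, 0.05: 250}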

On the Role of Speech Data in Reducing Toxicity Detection Bias
Samuel Bell | Mariano Coria Meglioli | Megan Richards | Eduardo Sánchez | Christophe Ropers | Skyler Wang | Adina Williams | Levent Sagun | Marta R. Costa-jussà
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

Text toxicity detection systems exhibit significant biases, producing disproportionate rates of false positives on samples mentioning demographic groups. But what about toxicity detection in speech? To investigate the extent to which text-based biases are mitigated by speech-based systems, we produce a set of high-quality group annotations for the multilingual MuTox dataset, and then leverage these annotations to systematically compare speech- and text-based toxicity classifiers. Our findings indicate that access to speech data during inference supports reduced bias against group mentions, particularly for ambiguous and disagreement-inducing samples. Our results also suggest that improving classifiers, rather than transcription pipelines, is more helpful for reducing group bias. We publicly release our annotations and provide recommendations for future toxicity dataset construction.
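To make the bias measure concrete, here is a minimal sketch of a per-group false positive rate computation of the kind the abstract refers to; the data layout and function name are hypothetical and are not the paper's evaluation code.

# Hypothetical sketch: per-group false positive rate (FPR) on non-toxic samples
# that mention a demographic group.
from collections import defaultdict

def per_group_fpr(samples):
    """samples: dicts with 'group', 'label' (gold, 1=toxic) and 'pred'
    (classifier output, 1=toxic) keys; this layout is assumed."""
    fp = defaultdict(int)   # non-toxic samples flagged as toxic, per group
    neg = defaultdict(int)  # total non-toxic samples, per group
    for s in samples:
        if s["label"] == 0:
            neg[s["group"]] += 1
            fp[s["group"]] += int(s["pred"] == 1)
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

# Comparing these rates for a text-based vs. a speech-based classifier would
# surface the group-level false-positive gap the paper studies.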