Hugo Laurençon
2023
The ROOTS Search Tool: Data Transparency for LLMs
Aleksandra Piktus | Christopher Akiki | Paulo Villegas | Hugo Laurençon | Gérard Dupont | Sasha Luccioni | Yacine Jernite | Anna Rogers
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)
ROOTS is a 1.6TB multilingual text corpus developed for the training of BLOOM, currently the largest language model explicitly accompanied by commensurate data governance efforts. In continuation of these efforts, we present the ROOTS Search Tool: a search engine over the entire ROOTS corpus offering both fuzzy and exact search capabilities. ROOTS is the largest corpus to date that can be investigated this way. The ROOTS Search Tool is open-sourced and available on Hugging Face Spaces: https://huggingface.co/spaces/bigscience-data/roots-search. We describe our implementation and the possible use cases of our tool.
2022
DP-Parse: Finding Word Boundaries from Raw Speech with an Instance Lexicon
Robin Algayres | Tristan Ricoul | Julien Karadayi | Hugo Laurençon | Salah Zaiem | Abdelrahman Mohamed | Benoît Sagot | Emmanuel Dupoux
Transactions of the Association for Computational Linguistics, Volume 10
Finding word boundaries in continuous speech is challenging as there is little or no equivalent of a ‘space’ delimiter between words. Popular Bayesian non-parametric models for text segmentation (Goldwater et al., 2006, 2009) use a Dirichlet process to jointly segment sentences and build a lexicon of word types. We introduce DP-Parse, which uses similar principles but only relies on an instance lexicon of word tokens, avoiding the clustering errors that arise with a lexicon of word types. On the Zero Resource Speech Benchmark 2017, our model sets a new speech segmentation state-of-the-art in 5 languages. The algorithm monotonically improves with better input representations, achieving yet higher scores when fed with weakly supervised inputs. Despite lacking a type lexicon, DP-Parse can be pipelined to a language model and learn semantic and syntactic representations as assessed by a new spoken word embedding benchmark.