Iacopo Poli
2025
Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference
Benjamin Warner | Antoine Chaffin | Benjamin Clavié | Orion Weller | Oskar Hallström | Said Taghadouini | Alexis Gallagher | Raja Biswas | Faisal Ladhak | Tom Aarsen | Griffin Thomas Adams | Jeremy Howard | Iacopo Poli
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Encoder-only transformer models such as BERT offer a great performance-size tradeoff for retrieval and classification tasks with respect to larger decoder-only models. Although BERT is the workhorse of numerous production pipelines, there have been limited Pareto improvements to it since its release. In this paper, we introduce ModernBERT, bringing modern model optimizations to encoder-only models and representing a major Pareto improvement over older encoders. Trained on 2 trillion tokens with a native 8192 sequence length, ModernBERT models exhibit state-of-the-art results on a large pool of evaluations encompassing diverse classification tasks and both single- and multi-vector retrieval across different domains (including code). In addition to strong downstream performance, ModernBERT is also the most speed- and memory-efficient encoder and is designed for inference on common GPUs.
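A minimal usage sketch for an encoder-only model of this kind, assuming the Hugging Face transformers library and an assumed checkpoint name (the abstract itself does not prescribe any API or checkpoint):

```python
# Minimal sketch: masked-token prediction with an encoder-only model via
# Hugging Face transformers. The checkpoint name below is an assumption,
# not something specified by the abstract.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "answerdotai/ModernBERT-base"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = f"ModernBERT is an encoder-only {tokenizer.mask_token} model."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Find the masked position and decode the highest-scoring token for it.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
top_id = logits[0, mask_pos].argmax().item()
print(tokenizer.decode([top_id]))
```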
2022
PAGnol: An Extra-Large French Generative Model
Julien Launay | E.l. Tommasone | Baptiste Pannier | François Boniface | Amélie Chatelain | Alessandro Cappelli | Iacopo Poli | Djamé Seddah
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Access to large pre-trained models of varied architectures, in many different languages, is central to the democratization of NLP. We introduce PAGnol, a collection of French GPT models. Using scaling laws, we efficiently train PAGnol-XL (1.5B parameters) with the same computational budget as CamemBERT, a model 13 times smaller. PAGnol-XL is the largest model trained from scratch for the French language. We plan to train increasingly large and better-performing versions of PAGnol, exploring the capabilities of French extreme-scale models. For this first release, we focus on the pre-training and scaling calculations underlying PAGnol. We fit a scaling law for compute for the French language and compare it with its English counterpart. We find that the pre-training dataset significantly conditions the quality of the outputs, with common datasets such as OSCAR leading to low-quality offensive text. We evaluate our models on discriminative and generative tasks in French, comparing them to other state-of-the-art French and multilingual models, and reaching the state of the art on the abstractive summarization task. Our research was conducted on the public GENCI Jean Zay supercomputer, and our models up to the Large size are made publicly available.
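A worked sketch of the kind of compute scaling-law fit the abstract mentions, assuming a simple power-law form L(C) = a · C^(−b); the data points below are purely illustrative and are not taken from the paper:

```python
# Illustrative sketch: fitting a power-law scaling curve L(C) = a * C**(-b)
# in log-log space. The (compute, loss) pairs are made up for demonstration
# and are NOT values from the PAGnol paper.
import numpy as np

compute = np.array([1e17, 1e18, 1e19, 1e20])  # hypothetical FLOP budgets
loss = np.array([4.2, 3.6, 3.1, 2.7])         # hypothetical validation losses

# log L = log a - b * log C, so a linear fit in log-log space recovers (a, b).
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(intercept), -slope
print(f"L(C) ≈ {a:.2f} * C^(-{b:.3f})")
```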