François Fleuret
Also published as: Francois Fleuret
2023
HyperMixer: An MLP-based Low Cost Alternative to Transformers
Florian Mai | Arnaud Pannatier | Fabio Fehr | Haolin Chen | Francois Marelli | Francois Fleuret | James Henderson
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Transformer-based architectures are the model of choice for natural language understanding, but they come at a significant cost, as they have quadratic complexity in the input length, require a lot of training data, and can be difficult to tune. In the pursuit of lower costs, we investigate simple MLP-based architectures. We find that existing architectures such as MLPMixer, which achieves token mixing through a static MLP applied to each feature independently, are too detached from the inductive biases required for natural language understanding. In this paper, we propose a simple variant, HyperMixer, which forms the token mixing MLP dynamically using hypernetworks. Empirically, we demonstrate that our model performs better than alternative MLP-based models, and on par with Transformers. In contrast to Transformers, HyperMixer achieves these results at substantially lower costs in terms of processing time, training data, and hyperparameter tuning.
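The core idea, generating the token-mixing weights from the tokens themselves with small hypernetworks, can be sketched in a few lines. Below is a minimal PyTorch sketch, assuming each hypernetwork is a single linear layer that maps a token embedding to one row of the mixing weights; the dimension names and sizes are illustrative, not the paper's configuration.

```python
# Minimal sketch of hypernetwork-based token mixing in the spirit of HyperMixer.
# Assumptions: single-linear-layer hypernetworks, illustrative dimensions.
import torch
import torch.nn as nn

class HyperTokenMixing(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        # Hypernetworks: map each token embedding to one row of the mixing weights.
        self.hyper_w1 = nn.Linear(d_model, d_hidden)
        self.hyper_w2 = nn.Linear(d_model, d_hidden)
        self.activation = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, d_model)
        w1 = self.hyper_w1(x)   # (batch, num_tokens, d_hidden)
        w2 = self.hyper_w2(x)   # (batch, num_tokens, d_hidden)
        # Mix information across the token dimension with the generated weights.
        mixed = self.activation(w1.transpose(1, 2) @ x)  # (batch, d_hidden, d_model)
        return w2 @ mixed                                # (batch, num_tokens, d_model)

# Usage: mix a batch of 2 sequences of 16 tokens with 64-dimensional embeddings.
layer = HyperTokenMixing(d_model=64, d_hidden=128)
out = layer(torch.randn(2, 16, 64))
print(out.shape)  # torch.Size([2, 16, 64])
```

Because the mixing weights are produced from the input rather than fixed, such a layer can handle variable-length sequences, which a static MLP-Mixer token-mixing matrix cannot.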
2021
Language Models are Few-Shot Butlers
Vincent Micheli | Francois Fleuret
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Pretrained language models demonstrate strong performance in most NLP tasks when fine-tuned on small task-specific datasets. Hence, these autoregressive models constitute ideal agents to operate in text-based environments where language understanding and generative capabilities are essential. Nonetheless, collecting expert demonstrations in such environments is a time-consuming endeavour. We introduce a two-stage procedure to learn from a small set of demonstrations and further improve by interacting with an environment. We show that language models fine-tuned with only 1.2% of the expert demonstrations and a simple reinforcement learning algorithm achieve a 51% absolute improvement in success rate over existing methods in the ALFWorld environment.
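The two-stage recipe, supervised learning from a small set of expert demonstrations followed by further improvement through environment interaction, can be illustrated with a self-contained toy. The one-step ToyEnv and tabular ToyPolicy below stand in for ALFWorld and the fine-tuned language model, and the bandit-style score update stands in for a proper RL algorithm; every name here is an illustrative assumption, not the authors' implementation.

```python
import random
from collections import defaultdict

class ToyEnv:
    """One-step environment: the agent succeeds by echoing the prompted command."""
    COMMANDS = ["open the fridge", "turn on the light"]

    def reset(self):
        self.goal = random.choice(self.COMMANDS)
        return self.goal                       # observation = the instruction text

    def step(self, action):
        reward = 1.0 if action == self.goal else 0.0
        return None, reward, True              # observation, reward, done

class ToyPolicy:
    """Tabular stand-in for a language model: keeps a score per (prompt, action)."""
    def __init__(self, actions):
        self.actions = actions
        self.scores = defaultdict(float)

    def act(self, prompt):
        # Greedy choice with a little random noise for exploration.
        return max(self.actions,
                   key=lambda a: self.scores[(prompt, a)] + 0.1 * random.random())

    def update(self, prompt, action, reward, lr=0.5):
        self.scores[(prompt, action)] += lr * reward

def train(env, demos, episodes=200):
    policy = ToyPolicy(actions=ToyEnv.COMMANDS)
    # Stage 1: imitation of a small set of expert demonstrations.
    for prompt, expert_action in demos:
        policy.update(prompt, expert_action, reward=1.0)
    # Stage 2: further improvement by interacting with the environment.
    for _ in range(episodes):
        prompt = env.reset()
        action = policy.act(prompt)
        _, reward, _ = env.step(action)
        policy.update(prompt, action, reward)
    return policy

policy = train(ToyEnv(), demos=[("open the fridge", "open the fridge")])
print(policy.act("turn on the light"))  # behaviour learned from interaction alone
```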
2020
On the importance of pre-training data volume for compact language models
Vincent Micheli | Martin d’Hoffschmidt | François Fleuret
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Recent advances in language modeling have led to computationally intensive and resource-demanding state-of-the-art models. In an effort towards sustainable practices, we study the impact of pre-training data volume on compact language models. Multiple BERT-based models are trained on gradually increasing amounts of French text. Through fine-tuning on the French Question Answering Dataset (FQuAD), we observe that well-performing models are obtained with as little as 100 MB of text. In addition, we show that past critically low amounts of pre-training data, an intermediate pre-training step on the task-specific corpus does not yield substantial improvements.
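The study's protocol, pre-training compact models on gradually increasing amounts of raw text before fine-tuning on FQuAD, starts from nested corpus subsets of fixed size. Here is a minimal sketch of that subsetting step, assuming a shuffled plain-text corpus file; the file name and byte budgets are illustrative, not the paper's exact splits.

```python
# Carve out nested prefixes of a shuffled raw-text corpus at increasing byte
# budgets; each subset would then be used to pre-train one compact BERT-style
# model, subsequently fine-tuned and evaluated on FQuAD.
# "french_corpus.txt" and the budget values are illustrative assumptions.
BUDGETS_MB = [10, 100, 1_000, 4_000]

def make_subsets(corpus_path: str, budgets_mb=BUDGETS_MB):
    for budget in budgets_mb:
        subset_path = f"subset_{budget}mb.txt"
        with open(corpus_path, "rb") as src, open(subset_path, "wb") as dst:
            dst.write(src.read(budget * 1024 * 1024))
        print(f"wrote {subset_path}")

make_subsets("french_corpus.txt")
```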