Thomas Wolf


2023

FinGPT: Large Generative Models for a Small Language
Risto Luukkonen | Ville Komulainen | Jouni Luoma | Anni Eskelinen | Jenna Kanerva | Hanna-Mari Kupari | Filip Ginter | Veronika Laippala | Niklas Muennighoff | Aleksandra Piktus | Thomas Wang | Nouamane Tazi | Teven Le Scao | Thomas Wolf | Osma Suominen | Samuli Sairanen | Mikko Merioksa | Jyrki Heinonen | Aija Vahtola | Samuel Antao | Sampo Pyysalo
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Large language models (LLMs) excel in many tasks in NLP and beyond, but most open models have very limited coverage of smaller languages and LLM work tends to focus on languages where nearly unlimited data is available for pretraining. In this work, we study the challenges of creating LLMs for Finnish, a language spoken by less than 0.1% of the world population. We compile an extensive dataset of Finnish combining web crawls, news, social media and eBooks. We pursue two approaches to pretrain models: 1) we train seven monolingual models from scratch (186M to 13B parameters), dubbed FinGPT; and 2) we continue the pretraining of the multilingual BLOOM model on a mix of its original training data and Finnish, resulting in a 176 billion parameter model we call BLUUMI. For model evaluation, we introduce FIN-bench, a version of BIG-bench with Finnish tasks. We also assess other model qualities such as toxicity and bias. Our models and tools are openly available at https://turkunlp.org/gpt3-finnish.
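
The released checkpoints are standard causal language models, so a minimal generation sketch with the transformers library might look like the following. The model identifier below is an assumption for illustration and may not match the actual release names on the project page.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TurkuNLP/gpt3-finnish-small"  # assumed identifier; check the project page for the real names
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Sample a short Finnish continuation from the model.
inputs = tokenizer("Suomen kieli on", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))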

2022

Proceedings of BigScience Episode #5 -- Workshop on Challenges & Perspectives in Creating Large Language Models
Angela Fan | Suzana Ilic | Thomas Wolf | Matthias Gallé
Proceedings of BigScience Episode #5 -- Workshop on Challenges & Perspectives in Creating Large Language Models

2021

Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing
Nafise Sadat Moosavi | Iryna Gurevych | Angela Fan | Thomas Wolf | Yufang Hou | Ana Marasović | Sujith Ravi
Proceedings of the Second Workshop on Simple and Efficient Natural Language Processing

Datasets: A Community Library for Natural Language Processing
Quentin Lhoest | Albert Villanova del Moral | Yacine Jernite | Abhishek Thakur | Patrick von Platen | Suraj Patil | Julien Chaumond | Mariama Drame | Julien Plu | Lewis Tunstall | Joe Davison | Mario Šaško | Gunjan Chhablani | Bhavitvya Malik | Simon Brandeis | Teven Le Scao | Victor Sanh | Canwen Xu | Nicolas Patry | Angelina McMillan-Major | Philipp Schmid | Sylvain Gugger | Clément Delangue | Théo Matussière | Lysandre Debut | Stas Bekman | Pierric Cistac | Thibault Goehringer | Victor Mustar | François Lagunas | Alexander Rush | Thomas Wolf
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations

The scale, variety, and quantity of publicly available NLP datasets have grown rapidly as researchers propose new tasks, larger models, and novel benchmarks. Datasets is a community library for contemporary NLP designed to support this ecosystem. Datasets aims to standardize end-user interfaces, versioning, and documentation, while providing a lightweight front-end that behaves similarly for small datasets and for internet-scale corpora. The design of the library incorporates a distributed, community-driven approach to adding datasets and documenting usage. After a year of development, the library now includes more than 650 unique datasets, has more than 250 contributors, and has helped support a variety of novel cross-dataset research projects and shared tasks. The library is available at https://github.com/huggingface/datasets.
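
As a rough illustration of the end-user interface described above (a minimal sketch, not drawn from the paper itself), loading a dataset and inspecting its typed schema takes a few lines:

from datasets import load_dataset

# The same call pattern works for small benchmarks and for very large corpora,
# which are memory-mapped rather than loaded entirely into RAM.
dataset = load_dataset("glue", "sst2", split="train")
print(dataset[0])         # a single example as a plain Python dict
print(dataset.features)   # the dataset's typed schema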

2020

Transformers: State-of-the-Art Natural Language Processing
Thomas Wolf | Lysandre Debut | Victor Sanh | Julien Chaumond | Clement Delangue | Anthony Moi | Pierric Cistac | Tim Rault | Remi Louf | Morgan Funtowicz | Joe Davison | Sam Shleifer | Patrick von Platen | Clara Ma | Yacine Jernite | Julien Plu | Canwen Xu | Teven Le Scao | Sylvain Gugger | Mariama Drame | Quentin Lhoest | Alexander Rush
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations

Recent progress in natural language processing has been driven by advances in both model architecture and model pretraining. Transformer architectures have facilitated building higher-capacity models, and pretraining has made it possible to effectively utilize this capacity for a wide variety of tasks. Transformers is an open-source library with the goal of opening up these advances to the wider machine learning community. The library consists of carefully engineered state-of-the-art Transformer architectures under a unified API. Backing this library is a curated collection of pretrained models made by and available for the community. Transformers is designed to be extensible by researchers, simple for practitioners, and fast and robust in industrial deployments. The library is available at https://github.com/huggingface/transformers.
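
A minimal usage sketch of the unified API mentioned above (illustrative, not taken from the paper): a task-specific pipeline loads a pretrained tokenizer and model behind a single call.

from transformers import pipeline

# Downloads a default pretrained sentiment model and its tokenizer on first use.
classifier = pipeline("sentiment-analysis")
print(classifier("Transformers makes state-of-the-art NLP models easy to use."))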

The Amazing World of Neural Language Generation
Yangfeng Ji | Antoine Bosselut | Thomas Wolf | Asli Celikyilmaz
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts

Neural Language Generation (NLG) – using neural network models to generate coherent text – is among the most promising methods for automated text creation. Recent years have seen a paradigm shift in neural text generation, caused by advances in deep contextual language modeling (e.g., LSTMs, GPT, GPT-2) and transfer learning (e.g., ELMo, BERT). While these tools have dramatically improved the state of NLG, particularly for low-resource tasks, state-of-the-art NLG models still face many challenges: a lack of diversity in generated text, commonsense violations in depicted situations, difficulties in making use of factual information, and difficulties in designing reliable evaluation metrics. In this tutorial, we will present an overview of the current state of the art in neural network architectures, and how they shaped recent research directions in text generation. We will discuss how and why these models succeed/fail at generating coherent text, and provide insights on several applications.
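
To ground the discussion of generation quality and diversity, a small hedged example (using GPT-2 via the transformers library, only one of the models the tutorial covers) shows how a decoding strategy such as nucleus sampling is applied:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Neural language generation has", return_tensors="pt")
# Nucleus (top-p) sampling trades off fluency against the diversity issues
# discussed in the tutorial; greedy decoding would be more repetitive.
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))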

Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing
Nafise Sadat Moosavi | Angela Fan | Vered Shwartz | Goran Glavaš | Shafiq Joty | Alex Wang | Thomas Wolf
Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing

Overview of the SustaiNLP 2020 Shared Task
Alex Wang | Thomas Wolf
Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing

We describe the SustaiNLP 2020 shared task: efficient inference on the SuperGLUE benchmark (Wang et al., 2019). Participants are evaluated based on performance on the benchmark as well as energy consumed in making predictions on the test sets. We describe the task, its organization, and the submitted systems. Across the six submissions to the shared task, participants achieved efficiency gains of 20× over a standard BERT (Devlin et al., 2019) baseline, while losing less than an absolute point in performance.
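
The evaluation criterion combines task performance with energy consumed at inference time; as a toy illustration of how a gain over the baseline might be computed (the numbers below are invented, not taken from the shared task):

# Hypothetical numbers; the shared task measured actual energy consumed
# while predicting on the SuperGLUE test sets.
baseline_energy_kwh = 1.00      # assumed energy of the BERT baseline
submission_energy_kwh = 0.05    # assumed energy of a submitted system
gain = baseline_energy_kwh / submission_energy_kwh
print(f"efficiency gain: {gain:.0f}x over the baseline")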

2019

Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation
Antoine Bosselut | Asli Celikyilmaz | Marjan Ghazvininejad | Srinivasan Iyer | Urvashi Khandelwal | Hannah Rashkin | Thomas Wolf
Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation

Transfer Learning in Natural Language Processing
Sebastian Ruder | Matthew E. Peters | Swabha Swayamdipta | Thomas Wolf
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorials

The classic supervised machine learning paradigm is based on learning, in isolation, a single predictive model for a task using a single dataset. This approach requires a large number of training examples and performs best for well-defined and narrow tasks. Transfer learning refers to a set of methods that extend this approach by leveraging data from additional domains or tasks to train a model with better generalization properties. Over the last two years, the field of Natural Language Processing (NLP) has witnessed the emergence of several transfer learning methods and architectures which significantly improved upon the state of the art on a wide range of NLP tasks. These improvements, together with the wide availability and ease of integration of these methods, are reminiscent of the factors that led to the success of pretrained word embeddings and ImageNet pretraining in computer vision, and indicate that these methods will likely become a common tool in the NLP landscape as well as an important research direction. We will present an overview of modern transfer learning methods in NLP, how models are pretrained, what information the representations they learn capture, and review examples and case studies on how these models can be integrated and adapted to downstream NLP tasks.
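
As a concrete and deliberately minimal sketch of the pretrain-then-adapt recipe the tutorial surveys, one can fine-tune a pretrained encoder on a downstream classification task; the model, inputs, and labels below are illustrative, not taken from the tutorial.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

batch = tokenizer(["a great movie", "a dull movie"], padding=True, return_tensors="pt")
outputs = model(**batch, labels=torch.tensor([1, 0]))
outputs.loss.backward()   # gradients flow into the pretrained weights, i.e. fine-tuning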

Large-Scale Transfer Learning for Natural Language Generation
Sergey Golovanov | Rauf Kurbanov | Sergey Nikolenko | Kyryl Truskovskyi | Alexander Tselousov | Thomas Wolf
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Large-scale pretrained language models define the state of the art in natural language processing, achieving outstanding performance on a variety of tasks. We study how these architectures can be applied and adapted for natural language generation, comparing a number of architectural and training schemes. We focus in particular on open-domain dialog as a typical high-entropy generation task, presenting and comparing different architectures for adapting pretrained models, with state-of-the-art results.
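
One simple way to adapt a pretrained causal language model to open-domain dialog, sketched below with GPT-2, is to concatenate the dialog history into a single input sequence and decode a reply. This is only an illustration and does not reproduce the specific adaptation architectures compared in the paper.

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

history = ["Hi, how are you?", "Pretty good, I just got back from a hike."]
prompt = tokenizer.eos_token.join(history) + tokenizer.eos_token
inputs = tokenizer(prompt, return_tensors="pt")
reply = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9,
                       pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))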

2018

Continuous Learning in a Hierarchical Multiscale Neural Network
Thomas Wolf | Julien Chaumond | Clement Delangue
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

We reformulate the problem of encoding a multi-scale representation of a sequence in a language model by casting it in a continuous learning framework. We propose a hierarchical multi-scale language model in which short time-scale dependencies are encoded in the hidden state of a lower-level recurrent neural network, while longer time-scale dependencies are encoded in the dynamics of the lower-level network by having a meta-learner update the weights of the lower-level neural network in an online meta-learning fashion. We use elastic weight consolidation as a higher-level mechanism to prevent catastrophic forgetting in our continuous learning framework.
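
For reference, the elastic weight consolidation penalty mentioned above takes the form of a quadratic anchor around previously learned weights, weighted by per-parameter Fisher information estimates. A minimal PyTorch-style sketch (my own illustration, not code from the paper):

import torch

def ewc_penalty(model, fisher, old_params, lambda_ewc=1.0):
    # fisher: per-parameter Fisher information estimates from earlier learning
    # old_params: a snapshot of the parameters to anchor to
    loss = torch.zeros(())
    for name, param in model.named_parameters():
        loss = loss + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * lambda_ewc * loss

# total_loss = task_loss + ewc_penalty(model, fisher, old_params)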