2024
Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research
Luca Soldaini | Rodney Kinney | Akshita Bhagia | Dustin Schwenk | David Atkinson | Russell Authur | Ben Bogin | Khyathi Chandu | Jennifer Dumas | Yanai Elazar | Valentin Hofmann | Ananya Jha | Sachin Kumar | Li Lucy | Xinxi Lyu | Nathan Lambert | Ian Magnusson | Jacob Morrison | Niklas Muennighoff | Aakanksha Naik | Crystal Nam | Matthew Peters | Abhilasha Ravichander | Kyle Richardson | Zejiang Shen | Emma Strubell | Nishant Subramani | Oyvind Tafjord | Evan Walsh | Luke Zettlemoyer | Noah Smith | Hannaneh Hajishirzi | Iz Beltagy | Dirk Groeneveld | Jesse Dodge | Kyle Lo
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Information about pretraining corpora used to train the current best-performing language models is seldom discussed: commercial models rarely detail their data, and even open models are often released without accompanying training data or recipes to reproduce them. As a result, it is challenging to conduct and advance scientific research on language modeling, such as understanding how training data impacts model capabilities and limitations. To facilitate scientific research on language model pretraining, we curate and release Dolma, a three-trillion-token English corpus, built from a diverse mixture of web content, scientific papers, code, public-domain books, social media, and encyclopedic materials. We extensively document Dolma, including its design principles, details about its construction, and a summary of its contents. We present analyses and experimental results on intermediate states of Dolma to share what we have learned about important data curation practices. Finally, we open-source our data curation toolkit to enable reproduction of our work as well as support further research in large-scale data curation.
OLMo: Accelerating the Science of Language Models
Dirk Groeneveld | Iz Beltagy | Evan Walsh | Akshita Bhagia | Rodney Kinney | Oyvind Tafjord | Ananya Jha | Hamish Ivison | Ian Magnusson | Yizhong Wang | Shane Arora | David Atkinson | Russell Authur | Khyathi Chandu | Arman Cohan | Jennifer Dumas | Yanai Elazar | Yuling Gu | Jack Hessel | Tushar Khot | William Merrill | Jacob Morrison | Niklas Muennighoff | Aakanksha Naik | Crystal Nam | Matthew Peters | Valentina Pyatkin | Abhilasha Ravichander | Dustin Schwenk | Saurabh Shah | William Smith | Emma Strubell | Nishant Subramani | Mitchell Wortsman | Pradeep Dasigi | Nathan Lambert | Kyle Richardson | Luke Zettlemoyer | Jesse Dodge | Kyle Lo | Luca Soldaini | Noah Smith | Hannaneh Hajishirzi
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Language models (LMs) have become ubiquitous in both NLP research and in commercial product offerings. As their commercial importance has surged, the most powerful models have become closed off, gated behind proprietary interfaces, with important details of their training data, architectures, and development undisclosed. Given the importance of these details in scientifically studying these models, including their biases and potential risks, we believe it is essential for the research community to have access to powerful, truly open LMs. To this end, we have built OLMo, a competitive, truly Open Language Model, to enable the scientific study of language models. Unlike most prior efforts that have only released model weights and inference code, we release OLMo alongside open training data and training and evaluation code. We hope this release will empower the open research community and inspire a new wave of innovation.
2023
HINT: Hypernetwork Instruction Tuning for Efficient Zero- and Few-Shot Generalisation
Hamish Ivison | Akshita Bhagia | Yizhong Wang | Hannaneh Hajishirzi | Matthew Peters
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Recent NLP models have shown the remarkable ability to effectively generalise ‘zero-shot’ to new tasks using only natural language instructions as guidance. However, many of these approaches suffer from high computational costs due to their reliance on concatenating lengthy instructions with every input example, resulting in costly reprocessing of the instruction. To avoid this, we introduce Hypernetworks for INstruction Tuning (HINT), which convert task instructions and examples into parameter-efficient modules inserted into an underlying model using a pretrained text encoder, eliminating the need to include instructions in the model input. The hypernetwork in HINT also produces an encoded instruction, which we concatenate with encoded inputs during decoding to further improve performance. HINT models outperform strong state-of-the-art baselines by over 10% when controlling for compute (measured in FLOPs). By converting instructions into modules, HINT models can effectively disregard the length of instructions and few-shot example inputs in terms of compute usage. As a result, HINT can enhance its performance by up to 25% by incorporating additional few-shot data, while utilizing only up to 5% more compute. This combines the strengths of parameter-efficient fine-tuning and in-context learning.
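The sketch below illustrates the core idea summarized in this abstract: a hypernetwork maps an encoded instruction to adapter weights that are inserted into the underlying model, so the instruction does not have to be concatenated with every input. It is a minimal, hypothetical PyTorch illustration, not the authors' HINT implementation; the dimensions, pooling, and adapter placement are assumptions.

```python
# Minimal sketch of the hypernetwork-as-adapter-generator idea described above.
# NOT the authors' HINT code: sizes, pooling, and adapter placement are illustrative.
import torch
import torch.nn as nn

D_MODEL, D_ADAPTER = 64, 16  # toy dimensions

class HyperAdapter(nn.Module):
    """Generates a bottleneck adapter's weights from an encoded instruction."""
    def __init__(self):
        super().__init__()
        # One linear map per generated adapter weight matrix.
        self.to_down = nn.Linear(D_MODEL, D_MODEL * D_ADAPTER)
        self.to_up = nn.Linear(D_MODEL, D_ADAPTER * D_MODEL)

    def forward(self, instruction_repr):                     # (D_MODEL,)
        w_down = self.to_down(instruction_repr).view(D_MODEL, D_ADAPTER)
        w_up = self.to_up(instruction_repr).view(D_ADAPTER, D_MODEL)
        return w_down, w_up

def apply_adapter(hidden, w_down, w_up):
    """Insert the generated adapter into the underlying model's forward pass."""
    return hidden + torch.relu(hidden @ w_down) @ w_up       # residual bottleneck

# The instruction is encoded ONCE (here: a stand-in pooled embedding), so it never
# has to be re-concatenated with every input example at inference time.
instruction_repr = torch.randn(D_MODEL)
w_down, w_up = HyperAdapter()(instruction_repr)

hidden_states = torch.randn(8, 32, D_MODEL)                  # batch of encoded inputs
adapted = apply_adapter(hidden_states, w_down, w_up)
print(adapted.shape)                                         # torch.Size([8, 32, 64])
```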
2022
Continued Pretraining for Better Zero- and Few-Shot Promptability
Zhaofeng Wu | Robert L Logan IV | Pete Walsh | Akshita Bhagia | Dirk Groeneveld | Sameer Singh | Iz Beltagy
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Recently introduced language model prompting methods can achieve high accuracy in zero- and few-shot settings while requiring few to no learned task-specific parameters. Nevertheless, these methods still often trail behind full model finetuning. In this work, we investigate if a dedicated continued pretraining stage could improve “promptability”, i.e., zero-shot performance with natural language prompts or few-shot performance with prompt tuning. We reveal settings where existing continued pretraining methods lack promptability. We also identify current methodological gaps, which we fill with thorough large-scale experiments. We demonstrate that a simple recipe, continued pretraining that incorporates a trainable prompt during multi-task learning, leads to improved promptability in both zero- and few-shot settings compared to existing methods, up to 31% relative. On the other hand, we find that continued pretraining using MAML-style meta-learning, a method that directly optimizes few-shot promptability, yields subpar performance. We validate our findings with two prompt tuning methods, and, based on our results, we provide concrete recommendations to optimize promptability for different use cases.
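As a rough illustration of the recipe highlighted in this abstract, continued pretraining that incorporates a trainable soft prompt during multi-task learning, here is a hedged PyTorch sketch. The backbone, prompt length, objective, and task mixture are stand-ins, not the paper's exact configuration.

```python
# Minimal sketch of continued pretraining with a trainable soft prompt prepended to
# the input embeddings. Illustrative assumptions only; not the paper's setup.
import torch
import torch.nn as nn

VOCAB, D_MODEL, PROMPT_LEN = 1000, 64, 20

embed = nn.Embedding(VOCAB, D_MODEL)                          # stand-in for a pretrained LM's embeddings
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True), num_layers=2
)
lm_head = nn.Linear(D_MODEL, VOCAB)
soft_prompt = nn.Parameter(torch.randn(PROMPT_LEN, D_MODEL) * 0.02)  # trained with the model

params = (list(embed.parameters()) + list(backbone.parameters())
          + list(lm_head.parameters()) + [soft_prompt])
optimizer = torch.optim.AdamW(params, lr=1e-4)

def continued_pretraining_step(token_ids, labels):
    """One update on a batch drawn from the multi-task mixture."""
    inputs = embed(token_ids)                                 # (B, T, D)
    prompt = soft_prompt.unsqueeze(0).expand(token_ids.size(0), -1, -1)
    hidden = backbone(torch.cat([prompt, inputs], dim=1))     # prompt tokens prepended
    logits = lm_head(hidden[:, PROMPT_LEN:])                  # predict only over real tokens
    loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB), labels.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batches; in practice these would come from a mixture of supervised tasks.
for _ in range(2):
    ids = torch.randint(0, VOCAB, (4, 16))
    print(continued_pretraining_step(ids, ids))
```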
On Advances in Text Generation from Images Beyond Captioning: A Case Study in Self-Rationalization
Shruti Palaskar | Akshita Bhagia | Yonatan Bisk | Florian Metze | Alan W Black | Ana Marasovic
Findings of the Association for Computational Linguistics: EMNLP 2022
Combining the visual modality with pretrained language models has been surprisingly effective for simple descriptive tasks such as image captioning. More general text generation, however, remains elusive. We take a step back and ask: How do these models work for more complex generative tasks, i.e., conditioning on both text and images? Are multimodal models simply visually adapted language models, or do they reason jointly over modalities? We investigate these questions in the context of self-rationalization (jointly generating task labels/answers and free-text explanations) of three tasks: (i) visual question answering in VQA-X, (ii) visual commonsense reasoning in VCR, and (iii) visual-textual entailment in E-SNLI-VE. We show that recent unimodal advances, CLIP image representations and scaling of language models, do not consistently improve self-rationalization in multimodal tasks. We find that no single model type works universally best across tasks, datasets, and finetuning data sizes. Our findings motivate the need for novel general backbones that move text generation from images and text beyond image captioning.
Findings of the WMT’22 Shared Task on Large-Scale Machine Translation Evaluation for African Languages
David Adelani | Md Mahfuz Ibn Alam | Antonios Anastasopoulos | Akshita Bhagia | Marta R. Costa-jussà | Jesse Dodge | Fahim Faisal | Christian Federmann | Natalia Fedorova | Francisco Guzmán | Sergey Koshelev | Jean Maillard | Vukosi Marivate | Jonathan Mbuya | Alexandre Mourachko | Safiyyah Saleem | Holger Schwenk | Guillaume Wenzek
Proceedings of the Seventh Conference on Machine Translation (WMT)
We present the results of the WMT’22 Shared Task on Large-Scale Machine Translation Evaluation for African Languages. The shared task included both a data and a systems track, along with additional innovations, such as a focus on African languages and extensive human evaluation of submitted systems. We received 14 system submissions from 8 teams, as well as 6 data track contributions. We report large progress in the quality of translation for African languages since the last iteration of this shared task: there is an increase of about 7.5 BLEU points across 72 language pairs, and the average BLEU score went from 15.09 to 22.60.