Hyung Chung
2023
Transcending Scaling Laws with 0.1% Extra Compute
Yi Tay | Jason Wei | Hyung Chung | Vinh Tran | David So | Siamak Shakeri | Xavier Garcia | Steven Zheng | Jinfeng Rao | Aakanksha Chowdhery | Denny Zhou | Donald Metzler | Slav Petrov | Neil Houlsby | Quoc Le | Mostafa Dehghani
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Scaling language models improves performance but comes with significant computational costs. This paper proposes UL2R, a method that substantially improves existing language models and their scaling curves with a relatively tiny amount of extra compute. The key idea is to continue training a state-of-the-art large language model for a few more steps with UL2’s mixture-of-denoiser objective. We show that, with almost negligible extra computational costs and no new sources of data, we are able to substantially improve the scaling properties of large language models on downstream metrics. In this paper, we continue training a baseline language model, PaLM, with UL2R, introducing a new set of models at 8B, 62B, and 540B scale which we call U-PaLM. Impressively, at 540B scale, we show an approximately 2x computational savings rate where U-PaLM achieves the same performance as the final PaLM 540B model at around half its computational budget (i.e., saving ~4.4 million TPUv4 hours). We further show that this improved scaling curve leads to “emergent abilities” on challenging BIG-Bench tasks—for instance, U-PaLM does much better on some tasks or demonstrates better quality at much smaller scale (62B as opposed to 540B). Overall, we show that U-PaLM outperforms PaLM on many few-shot setups, including reasoning tasks with chain-of-thought (e.g., GSM8K), multilingual tasks (MGSM, TydiQA), MMLU and challenging BIG-Bench tasks.
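To make the abstract's core idea concrete, below is a minimal sketch of how training examples might be drawn from a mixture-of-denoisers objective during continued pretraining. The denoiser names (R/X/S) follow UL2's terminology, but the span lengths, corruption rates, sentinel format, and function names here are illustrative assumptions, not the paper's exact recipe.

```python
import random

# Hypothetical denoiser configurations loosely following UL2's R/S/X setup:
# (mean span length, corruption rate). Values are illustrative only.
DENOISERS = {
    "R": (3, 0.15),    # regular denoising: short spans, low corruption
    "X": (32, 0.50),   # extreme denoising: long spans / heavy corruption
    "S": (None, 0.25), # sequential denoising: corrupt a suffix (prefix-LM style)
}

def corrupt(tokens, mean_span, rate, rng):
    """Replace spans of tokens with sentinel markers; return (inputs, targets)."""
    n = len(tokens)
    if mean_span is None:  # S-denoiser: mask a contiguous suffix
        cut = int(n * (1 - rate))
        return tokens[:cut] + ["<s_0>"], ["<s_0>"] + tokens[cut:]
    inputs, targets, i, sentinel = [], [], 0, 0
    while i < n:
        if rng.random() < rate / mean_span:  # start a corrupted span here
            span = max(1, int(rng.expovariate(1 / mean_span)))
            inputs.append(f"<s_{sentinel}>")
            targets += [f"<s_{sentinel}>"] + tokens[i:i + span]
            sentinel += 1
            i += span
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets

def sample_mixture_example(tokens, rng):
    """Draw one training example from the mixture of denoisers."""
    name = rng.choice(list(DENOISERS))
    mean_span, rate = DENOISERS[name]
    inputs, targets = corrupt(tokens, mean_span, rate, rng)
    # A mode token (e.g. "[R]") tells the model which denoiser produced the example.
    return [f"[{name}]"] + inputs, targets

# Continued pretraining would feed such (inputs, targets) pairs to an existing
# LM for a small number of extra steps on top of its original training.
rng = random.Random(0)
inp, tgt = sample_mixture_example("the quick brown fox jumps over the lazy dog".split(), rng)
print(inp)
print(tgt)
```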
Scaling Laws vs Model Architectures: How does Inductive Bias Influence Scaling?
Yi Tay | Mostafa Dehghani | Samira Abnar | Hyung Chung | William Fedus | Jinfeng Rao | Sharan Narang | Vinh Tran | Dani Yogatama | Donald Metzler
Findings of the Association for Computational Linguistics: EMNLP 2023
There has been a lot of interest in the scaling properties of Transformer models. However, not much has been done to investigate how different inductive biases and model architectures affect these scaling properties. Do model architectures scale differently? If so, how does inductive bias affect scaling behaviour? How does this influence upstream (pretraining) and downstream (transfer) performance? This paper conducts a systematic study of the scaling behaviour of ten diverse model architectures such as Transformers, Switch Transformers, Universal Transformers, Dynamic convolutions, Performers, and recently proposed MLP-Mixers. Via extensive experiments, we show that (1) architecture is indeed an important consideration when performing scaling and (2) the best-performing model can fluctuate at different scales. We believe that the findings outlined in this work have significant implications for how model architectures are currently evaluated in the community.
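As a small illustration of the kind of comparison this abstract describes, the sketch below fits a power law L(N) = a·N^(−b) to per-architecture loss measurements and shows how the best architecture can flip with scale. The architecture names and loss values are made-up placeholders, not results from the paper.

```python
import numpy as np

# Illustrative upstream-loss measurements (not from the paper): validation loss
# at several parameter counts for two hypothetical architectures.
observations = {
    "vanilla_transformer": {1e7: 4.10, 1e8: 3.55, 1e9: 3.05, 1e10: 2.62},
    "alt_architecture":    {1e7: 3.90, 1e8: 3.50, 1e9: 3.15, 1e10: 2.85},
}

def fit_power_law(points):
    """Fit L(N) = a * N**(-b) by linear regression in log-log space."""
    n = np.array(sorted(points))
    loss = np.array([points[k] for k in sorted(points)])
    slope, intercept = np.polyfit(np.log(n), np.log(loss), 1)
    return np.exp(intercept), -slope  # a, b

fits = {name: fit_power_law(pts) for name, pts in observations.items()}

# The "best" architecture can change with scale: rank by extrapolated loss at two sizes.
for n_params in (1e8, 1e11):
    ranked = sorted(fits, key=lambda m: fits[m][0] * n_params ** (-fits[m][1]))
    print(f"N={n_params:.0e}: best by extrapolated loss -> {ranked[0]}")
```

With the placeholder numbers above, the shallower-sloped architecture wins at small scale while the steeper-sloped one overtakes it at large scale, which is the kind of crossover the paper's finding (2) refers to.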