Hejia Zhang
2024
Effective Long-Context Scaling of Foundation Models
Wenhan Xiong | Jingyu Liu | Igor Molybog | Hejia Zhang | Prajjwal Bhargava | Rui Hou | Louis Martin | Rashi Rungta | Karthik Abinav Sankararaman | Barlas Oguz | Madian Khabsa | Han Fang | Yashar Mehdad | Sharan Narang | Kshitiz Malik | Angela Fan | Shruti Bhosale | Sergey Edunov | Mike Lewis | Sinong Wang | Hao Ma
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
We present an effective recipe to train strong long-context LLMs that are capable of utilizing massive context windows of up to 32,000 tokens. Our models are built through continual pretraining from Llama 2 checkpoints with longer text sequences and on a dataset where long texts are upsampled. We perform extensive evaluation using language modeling, synthetic context probing tasks, and a wide range of downstream benchmarks. Across all evaluations, our models achieve consistent improvements on most regular-context tasks and significant improvements on long-context tasks over Llama 2. Moreover, with a cost-effective instruction tuning procedure that is free of expensive annotation, the presented models can already surpass gpt-3.5-turbo-16k's overall performance on long-context benchmarks. Alongside these results, we provide an in-depth analysis of each individual component of our method. We delve into Llama's position encodings and discuss their key limitation in modeling long data. We examine the impact of various design choices in the pretraining process, including the data mix and the training curriculum of sequence lengths: ablation results suggest that having abundant long texts in the pretraining dataset is not the key to achieving strong performance, and we empirically verify that long-context continual pretraining is more efficient and similarly effective compared to pretraining from scratch with long sequences.
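The position-encoding limitation mentioned in the abstract concerns rotary position embeddings (RoPE); the paper addresses it by raising RoPE's base frequency so that attention to distant tokens decays less (an adjusted base of 500,000 versus Llama 2's default of 10,000). The snippet below is a minimal PyTorch sketch of RoPE with such a configurable base, not code from the paper; the names `rope_frequencies` and `apply_rope` are illustrative.

```python
import torch

def rope_frequencies(head_dim: int, max_pos: int, base: float = 500_000.0):
    """Precompute RoPE rotation angles.

    A larger `base` slows the rotation of the low-frequency dimensions,
    which is the adjusted-base-frequency change used for long-context
    training (Llama 2's original RoPE uses base=10_000).
    """
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(max_pos).float()
    angles = torch.outer(positions, inv_freq)      # (max_pos, head_dim // 2)
    return torch.cos(angles), torch.sin(angles)

def apply_rope(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor) -> torch.Tensor:
    """Rotate query/key vectors x of shape (..., seq_len, head_dim)."""
    x1, x2 = x[..., 0::2], x[..., 1::2]            # split into paired dimensions
    rotated = torch.stack((x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos), dim=-1)
    return rotated.flatten(-2)                     # re-interleave to (..., seq_len, head_dim)

# Example: rotate a batch of query vectors for a 32k-token window.
cos, sin = rope_frequencies(head_dim=128, max_pos=32_768)
q = torch.randn(2, 32_768, 128)
q_rot = apply_rope(q, cos, sin)
```

Because only the precomputed angles change, this kind of base-frequency adjustment requires no architectural modification, which is consistent with the paper's continual-pretraining recipe starting from existing Llama 2 checkpoints.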