Adapting Pretrained Text-to-Text Models for Long Text Sequences
Wenhan Xiong, Anchit Gupta, Shubham Toshniwal, Yashar Mehdad, Scott Yih
Abstract
We present an empirical study of adapting an existing pretrained text-to-text model for long-sequence inputs. Through a comprehensive study along three axes of the pretraining pipeline (model architecture, optimization objective, and pretraining corpus), we propose an effective recipe for building long-context models from existing short-context models. Specifically, we replace the full attention in transformers with pooling-augmented blockwise attention and pretrain the model with a masked-span prediction task using spans of varying lengths. In terms of the pretraining corpus, we find that randomly concatenating short documents from a large open-domain corpus yields better performance than using existing long-document corpora, which are typically limited in domain coverage. With these findings, we build a long-context model that achieves competitive performance on long-text QA tasks and establishes a new state of the art on five long-text summarization datasets, often outperforming previous methods with larger model sizes.
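To make the architectural change concrete, below is a minimal sketch of pooling-augmented blockwise attention; it is an illustrative reconstruction, not the authors' released implementation. Each query attends to the keys and values inside its own local block plus one mean-pooled summary per block, giving coarse global context at low cost. The function name, block size, and the choice of mean pooling are assumptions made for this example.

```python
# Minimal sketch (assumed, not the paper's code) of pooling-augmented
# blockwise attention: local block attention plus mean-pooled block summaries.
import torch
import torch.nn.functional as F


def pooling_blockwise_attention(q, k, v, block_size=64):
    """q, k, v: (batch, seq_len, dim); seq_len assumed divisible by block_size."""
    bsz, seq_len, dim = q.shape
    n_blocks = seq_len // block_size

    # Split the sequence into blocks: (batch, n_blocks, block_size, dim)
    qb = q.view(bsz, n_blocks, block_size, dim)
    kb = k.view(bsz, n_blocks, block_size, dim)
    vb = v.view(bsz, n_blocks, block_size, dim)

    # One mean-pooled key/value summary per block: (batch, n_blocks, dim)
    k_pool = kb.mean(dim=2)
    v_pool = vb.mean(dim=2)

    # Each block's queries see their local keys/values plus all pooled summaries:
    # (batch, n_blocks, block_size + n_blocks, dim)
    k_all = torch.cat([kb, k_pool.unsqueeze(1).expand(-1, n_blocks, -1, -1)], dim=2)
    v_all = torch.cat([vb, v_pool.unsqueeze(1).expand(-1, n_blocks, -1, -1)], dim=2)

    # Scaled dot-product attention within each block's restricted key set.
    scores = torch.einsum("bnqd,bnkd->bnqk", qb, k_all) / dim ** 0.5
    attn = F.softmax(scores, dim=-1)
    out = torch.einsum("bnqk,bnkd->bnqd", attn, v_all)
    return out.reshape(bsz, seq_len, dim)


# Usage: an 8k-token sequence touches only local blocks plus n_blocks pooled
# vectors, instead of the full 8k x 8k attention matrix of full attention.
x = torch.randn(2, 8192, 64)
y = pooling_blockwise_attention(x, x, x, block_size=64)
print(y.shape)  # torch.Size([2, 8192, 64])
```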
- Anthology ID: 2023.findings-emnlp.370
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
- Month: December
- Year: 2023
- Address: Singapore
- Editors: Houda Bouamor, Juan Pino, Kalika Bali
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 5566–5578
- URL: https://aclanthology.org/2023.findings-emnlp.370
- DOI: 10.18653/v1/2023.findings-emnlp.370
- Cite (ACL): Wenhan Xiong, Anchit Gupta, Shubham Toshniwal, Yashar Mehdad, and Scott Yih. 2023. Adapting Pretrained Text-to-Text Models for Long Text Sequences. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 5566–5578, Singapore. Association for Computational Linguistics.
- Cite (Informal): Adapting Pretrained Text-to-Text Models for Long Text Sequences (Xiong et al., Findings 2023)
- PDF: https://preview.aclanthology.org/proper-vol2-ingestion/2023.findings-emnlp.370.pdf