Abstract
Recurrent Neural Networks are showing much promise in many sub-areas of natural language processing, ranging from document classification to machine translation to automatic question answering. Despite their promise, many recurrent models have to read the whole text word by word, making them slow to handle long documents. For example, it is difficult to use a recurrent network to read a book and answer questions about it. In this paper, we present an approach to reading text while skipping irrelevant information if needed. The underlying model is a recurrent network that learns how far to jump after reading a few words of the input text. We employ a standard policy gradient method to train the model to make discrete jumping decisions. In our benchmarks on four different tasks, including number prediction, sentiment analysis, news article classification and automatic Q&A, our proposed model, a modified LSTM with jumping, is up to 6 times faster than the standard sequential LSTM, while maintaining the same or even better accuracy.
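As a rough illustration of the mechanism the abstract describes (an LSTM reads a small chunk of tokens, samples a discrete jump that skips ahead, and is trained with a policy gradient), the sketch below shows one possible PyTorch implementation. The chunk size, jump range, reward, and layer names are illustrative assumptions, not the paper's exact architecture or hyperparameters.

```python
# Minimal sketch of an LSTM that skims text by learning how far to jump.
# All sizes and the reward scheme are assumptions for illustration only.
import torch
import torch.nn as nn
from torch.distributions import Categorical


class SkimLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128,
                 read_tokens=3, max_jump=5, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.cell = nn.LSTMCell(embed_dim, hidden_dim)
        # Policy head: softmax over how many upcoming tokens to skip (0..max_jump).
        self.jump_head = nn.Linear(hidden_dim, max_jump + 1)
        self.cls_head = nn.Linear(hidden_dim, num_classes)
        self.read_tokens = read_tokens

    def forward(self, tokens):
        # tokens: LongTensor of shape (seq_len,); batch size 1 for clarity.
        h = torch.zeros(1, self.cell.hidden_size)
        c = torch.zeros(1, self.cell.hidden_size)
        log_probs, pos = [], 0
        while pos < tokens.size(0):
            # Read a small chunk of tokens word by word.
            end = min(pos + self.read_tokens, tokens.size(0))
            for t in range(pos, end):
                h, c = self.cell(self.embed(tokens[t:t + 1]), (h, c))
            pos = end
            # Sample a discrete jump and keep its log-probability for REINFORCE.
            dist = Categorical(logits=self.jump_head(h))
            jump = dist.sample()
            log_probs.append(dist.log_prob(jump))
            pos += int(jump.item())
        return self.cls_head(h), torch.stack(log_probs)


# One REINFORCE-style update on a toy example: the sampled jump sequence is
# rewarded when the final classification is correct (baselines omitted).
model = SkimLSTM(vocab_size=1000)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
tokens = torch.randint(0, 1000, (50,))
label = torch.tensor([1])

logits, log_probs = model(tokens)
reward = (logits.argmax(dim=-1) == label).float() * 2 - 1  # +1 if correct, -1 otherwise
loss = nn.functional.cross_entropy(logits, label) - (reward * log_probs).sum()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the jump is a sampled discrete action, ordinary backpropagation cannot flow through it; that is why the abstract mentions a policy gradient method, and the REINFORCE term above plays that role in this sketch.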
- Anthology ID:
- P17-1172
- Volume:
- Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month:
- July
- Year:
- 2017
- Address:
- Vancouver, Canada
- Editors:
- Regina Barzilay, Min-Yen Kan
- Venue:
- ACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 1880–1890
- URL:
- https://aclanthology.org/P17-1172
- DOI:
- 10.18653/v1/P17-1172
- Cite (ACL):
- Adams Wei Yu, Hongrae Lee, and Quoc Le. 2017. Learning to Skim Text. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1880–1890, Vancouver, Canada. Association for Computational Linguistics.
- Cite (Informal):
- Learning to Skim Text (Yu et al., ACL 2017)
- PDF:
- https://preview.aclanthology.org/naacl24-info/P17-1172.pdf
- Code:
- additional community code
- Data:
- AG News, Children's Book Test (CBT), IMDb Movie Reviews