Learning when to skim and when to read

Alexander Johansen, Richard Socher


Abstract
Many recent advances in deep learning for natural language processing have come at increasing computational cost, but the power of these state-of-the-art models is not needed for every example in a dataset. We demonstrate two approaches to reducing unnecessary computation in cases where a fast but weak baseline classifier and a stronger, slower model are both available. Applying an AUC-based metric to the task of sentiment classification, we find significant efficiency gains with both a probability-threshold method for reducing computational cost and one that uses a secondary decision network.
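The probability-threshold method described above can be sketched as follows: run the cheap classifier first, keep its prediction when its top-class probability is confident enough, and fall back to the expensive model otherwise. This is a minimal illustration, not the paper's implementation; the function name, threshold value, and the stub standing in for the slow model are all assumptions for the sake of the example.

```python
import numpy as np

def route_by_confidence(fast_probs, slow_model, inputs, threshold=0.9):
    """Keep the fast model's prediction when its top class probability
    meets `threshold`; otherwise invoke the slower, stronger model.
    Returns the predictions and the number of slow-model calls."""
    preds, slow_calls = [], 0
    for p, x in zip(fast_probs, inputs):
        if np.max(p) >= threshold:
            preds.append(int(np.argmax(p)))  # fast path: confident enough
        else:
            preds.append(slow_model(x))      # slow path: defer to big model
            slow_calls += 1
    return preds, slow_calls

# Toy example with hypothetical probabilities and a stub slow model.
fast_probs = np.array([[0.95, 0.05],   # confident -> fast prediction kept
                       [0.55, 0.45]])  # uncertain -> slow model invoked
slow_model = lambda x: 1               # stand-in for the stronger network
preds, slow_calls = route_by_confidence(fast_probs, slow_model, ["a", "b"])
# preds == [0, 1], slow_calls == 1
```

Raising the threshold trades computation for accuracy: more examples are deferred to the slow model, which is the trade-off the paper's AUC-based metric quantifies.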
Anthology ID: W17-2631
Volume: Proceedings of the 2nd Workshop on Representation Learning for NLP
Month: August
Year: 2017
Address: Vancouver, Canada
Editors: Phil Blunsom, Antoine Bordes, Kyunghyun Cho, Shay Cohen, Chris Dyer, Edward Grefenstette, Karl Moritz Hermann, Laura Rimell, Jason Weston, Scott Yih
Venue: RepL4NLP
SIG: SIGREP
Publisher: Association for Computational Linguistics
Pages: 257–264
URL: https://aclanthology.org/W17-2631
DOI: 10.18653/v1/W17-2631
Cite (ACL): Alexander Johansen and Richard Socher. 2017. Learning when to skim and when to read. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 257–264, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal): Learning when to skim and when to read (Johansen & Socher, RepL4NLP 2017)
PDF: https://preview.aclanthology.org/nschneid-patch-1/W17-2631.pdf
Data: SST