Very Deep Convolutional Networks for Text Classification

Alexis Conneau, Holger Schwenk, Loïc Barrault, Yann LeCun


Abstract
The dominant approaches for many NLP tasks are recurrent neural networks, in particular LSTMs, and convolutional neural networks. However, these architectures are rather shallow in comparison to the deep convolutional networks which have pushed the state of the art in computer vision. We present a new architecture (VDCNN) for text processing which operates directly at the character level and uses only small convolutions and pooling operations. We are able to show that the performance of this model increases with depth: using up to 29 convolutional layers, we report improvements over the state of the art on several public text classification tasks. To the best of our knowledge, this is the first time that very deep convolutional nets have been applied to text processing.
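To make the abstract's description concrete, below is a minimal, dependency-free sketch of the basic operations VDCNN stacks: character-level input encoding, a small (kernel-size-3) 1D convolution, and max-pooling. All names (`ALPHABET`, `char_to_ids`, `conv1d`, `max_pool`) and the alphabet itself are illustrative assumptions, not the paper's actual implementation, which stacks many such convolutional layers with learned filters.

```python
# Hypothetical sketch of VDCNN's building blocks; not the authors' code.

# Illustrative character vocabulary (the paper uses a fixed small alphabet).
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'\"/|_#$%^&*~`+=<>()[]{}"
PAD = 0  # id reserved for padding and unknown characters

def char_to_ids(text, max_len=16):
    """Map each character to an integer id; pad or truncate to max_len."""
    ids = [ALPHABET.find(c) + 1 for c in text.lower()[:max_len]]  # 0 => unknown
    ids = [i if i > 0 else PAD for i in ids]
    return ids + [PAD] * (max_len - len(ids))

def conv1d(xs, kernel, stride=1):
    """Valid 1D convolution (cross-correlation) over a sequence of numbers."""
    k = len(kernel)
    return [sum(xs[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(xs) - k + 1, stride)]

def max_pool(xs, size=2):
    """Non-overlapping max-pooling, halving the temporal resolution."""
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]
```

In the real architecture, each character id indexes a learned embedding and the convolutions have many learned filters per layer; depth comes from repeating small-convolution blocks and pooling between them, halving the sequence length while doubling the number of feature maps.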
Anthology ID:
E17-1104
Volume:
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
Month:
April
Year:
2017
Address:
Valencia, Spain
Editors:
Mirella Lapata, Phil Blunsom, Alexander Koller
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
1107–1116
URL:
https://aclanthology.org/E17-1104
Cite (ACL):
Alexis Conneau, Holger Schwenk, Loïc Barrault, and Yann LeCun. 2017. Very Deep Convolutional Networks for Text Classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1107–1116, Valencia, Spain. Association for Computational Linguistics.
Cite (Informal):
Very Deep Convolutional Networks for Text Classification (Conneau et al., EACL 2017)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/E17-1104.pdf
Code:
additional community code
Data:
AG News, DBpedia, Yahoo! Answers, Yelp Review Polarity