Smaller Text Classifiers with Discriminative Cluster Embeddings

Mingda Chen, Kevin Gimpel


Abstract
Word embedding parameters often dominate overall model sizes in neural methods for natural language processing. We reduce deployed model sizes of text classifiers by learning a hard word clustering in an end-to-end manner. We use the Gumbel-Softmax distribution to maximize over the latent clustering while minimizing the task loss. We propose variations that selectively assign additional parameters to words, which further improves accuracy while still remaining parameter-efficient.
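The abstract describes learning a hard word-to-cluster assignment end-to-end with the Gumbel-Softmax, so that each word stores only a cluster id rather than a full embedding vector. Below is a minimal sketch of that idea, not the authors' released code (see the repository linked under Code); the class name, dimensions, and the straight-through variant of the Gumbel-Softmax are illustrative assumptions.

```python
# Hedged sketch: a cluster-embedding layer that selects one shared cluster
# vector per word via the straight-through Gumbel-Softmax. Not the paper's
# exact implementation; hyperparameters and names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClusterEmbedding(nn.Module):
    def __init__(self, vocab_size, num_clusters, embed_dim, tau=1.0):
        super().__init__()
        # Per-word logits over clusters (vocab_size x num_clusters) and one
        # shared embedding per cluster (num_clusters x embed_dim).
        self.cluster_logits = nn.Parameter(torch.randn(vocab_size, num_clusters))
        self.cluster_vectors = nn.Parameter(torch.randn(num_clusters, embed_dim))
        self.tau = tau

    def forward(self, token_ids):
        logits = self.cluster_logits[token_ids]          # (batch, seq, num_clusters)
        if self.training:
            # Hard one-hot assignment in the forward pass, soft gradients in
            # the backward pass (straight-through Gumbel-Softmax).
            assign = F.gumbel_softmax(logits, tau=self.tau, hard=True)
        else:
            # At test time, pick the most likely cluster for each word.
            assign = F.one_hot(logits.argmax(dim=-1), logits.size(-1)).float()
        return assign @ self.cluster_vectors             # (batch, seq, embed_dim)

# Usage sketch: drop in place of nn.Embedding in a text classifier. After
# training, only integer cluster ids and the small cluster table need to be
# deployed, which is where the model-size reduction comes from.
emb = ClusterEmbedding(vocab_size=50000, num_clusters=256, embed_dim=300)
vecs = emb(torch.randint(0, 50000, (2, 10)))
print(vecs.shape)  # torch.Size([2, 10, 300])
```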
Anthology ID:
N18-2116
Volume:
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)
Month:
June
Year:
2018
Address:
New Orleans, Louisiana
Editors:
Marilyn Walker, Heng Ji, Amanda Stent
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
739–745
URL:
https://aclanthology.org/N18-2116
DOI:
10.18653/v1/N18-2116
Cite (ACL):
Mingda Chen and Kevin Gimpel. 2018. Smaller Text Classifiers with Discriminative Cluster Embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 739–745, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal):
Smaller Text Classifiers with Discriminative Cluster Embeddings (Chen & Gimpel, NAACL 2018)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/N18-2116.pdf
Code
mingdachen/word-cluster-embedding
Data
AG News
IMDb Movie Reviews
Yelp Review Polarity