Improving Neural Topic Models using Knowledge Distillation

Alexander Miserlis Hoyle, Pranav Goel, Philip Resnik


Abstract
Topic models are often used to identify human-interpretable topics to help make sense of large document collections. We use knowledge distillation to combine the best attributes of probabilistic topic models and pretrained transformers. Our modular method can be straightforwardly applied with any neural topic model to improve topic quality, which we demonstrate using two models having disparate architectures, obtaining state-of-the-art topic coherence. We show that our adaptable framework not only improves performance in the aggregate over all estimated topics, as is commonly reported, but also in head-to-head comparisons of aligned topics.
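Below is a minimal sketch, in PyTorch, of how a knowledge-distillation term might be folded into a neural topic model's reconstruction objective as the abstract describes (a transformer teacher supplying soft word targets to a topic-model student). It is an illustrative assumption, not the authors' implementation or the code in ahoho/kd-topic-models; the names distilled_reconstruction_loss, teacher_probs, lambda_, and temperature are hypothetical.

import torch
import torch.nn.functional as F

def distilled_reconstruction_loss(decoder_logits, bow, teacher_probs,
                                  lambda_=0.5, temperature=2.0):
    """Interpolate the usual bag-of-words target with a teacher's soft word targets.

    decoder_logits: (batch, vocab) logits from the topic model's decoder (student)
    bow:            (batch, vocab) bag-of-words counts for each document
    teacher_probs:  (batch, vocab) word distribution from a pretrained transformer teacher
    """
    log_probs = F.log_softmax(decoder_logits, dim=-1)

    # Standard topic-model reconstruction: cross-entropy against observed word counts.
    ce_bow = -(bow * log_probs).sum(dim=-1)

    # Distillation term: cross-entropy against the temperature-smoothed teacher
    # distribution, scaled by document length so the two terms are comparable.
    doc_len = bow.sum(dim=-1, keepdim=True)
    soft_targets = F.softmax(torch.log(teacher_probs + 1e-10) / temperature, dim=-1)
    ce_teacher = -(doc_len * soft_targets * log_probs).sum(dim=-1)

    return ((1 - lambda_) * ce_bow + lambda_ * ce_teacher).mean()

In this sketch, lambda_ trades off the hard bag-of-words target against the teacher's soft targets, and temperature smooths the teacher distribution, as in standard knowledge distillation; the actual objective and weighting used in the paper may differ.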
Anthology ID:
2020.emnlp-main.137
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1752–1771
URL:
https://aclanthology.org/2020.emnlp-main.137
DOI:
10.18653/v1/2020.emnlp-main.137
Bibkey:
Cite (ACL):
Alexander Miserlis Hoyle, Pranav Goel, and Philip Resnik. 2020. Improving Neural Topic Models using Knowledge Distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1752–1771, Online. Association for Computational Linguistics.
Cite (Informal):
Improving Neural Topic Models using Knowledge Distillation (Hoyle et al., EMNLP 2020)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2020.emnlp-main.137.pdf
Video:
https://slideslive.com/38939229
Code
ahoho/kd-topic-models
Data
IMDb Movie Reviews