Improving Contextualized Topic Models with Negative Sampling

Suman Adhya, Avishek Lahiri, Debarshi Kumar Sanyal, Partha Pratim Das


Abstract
Topic modeling has emerged as a dominant method for exploring large document collections. Recent approaches to topic modeling combine large contextualized language models with variational autoencoders. In this paper, we propose a negative sampling mechanism for a contextualized topic model to improve the quality of the generated topics. In particular, during model training we perturb the generated document-topic vector and use a triplet loss to encourage the document reconstructed from the correct document-topic vector to be similar to the input document and dissimilar to the document reconstructed from the perturbed vector. Experiments with different topic counts on three publicly available benchmark datasets show that, in most cases, our approach improves topic coherence over the baselines. Our model also achieves very high topic diversity.
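The triplet objective described in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' released implementation: the perturbation strategy (zeroing the largest entries of the document-topic vector and renormalizing), the cosine reconstruction distance, the margin value, and all names (perturb_theta, triplet_loss, decode) are assumptions made for the sake of the example.

import torch
import torch.nn.functional as F

def perturb_theta(theta, k=1):
    # Hypothetical negative-sampling perturbation: zero out the k largest
    # topic weights of each document-topic vector, then renormalize so the
    # result is still a distribution over topics.
    topk_idx = theta.topk(k, dim=-1).indices
    negative = theta.scatter(-1, topk_idx, 0.0)
    return negative / negative.sum(dim=-1, keepdim=True).clamp_min(1e-12)

def triplet_loss(bow, decode, theta, margin=1.0):
    # The reconstruction from the correct document-topic vector should be
    # closer to the input bag-of-words vector than the reconstruction from
    # the perturbed (negative) vector, by at least the margin.
    pos = decode(theta)                  # reconstruction from correct theta
    neg = decode(perturb_theta(theta))   # reconstruction from perturbed theta
    d_pos = 1.0 - F.cosine_similarity(bow, pos, dim=-1)
    d_neg = 1.0 - F.cosine_similarity(bow, neg, dim=-1)
    return F.relu(d_pos - d_neg + margin).mean()

In training, a term like this would presumably be added, with some weighting hyperparameter, to the usual variational autoencoder objective of a contextualized topic model (reconstruction likelihood plus KL divergence).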
Anthology ID:
2022.icon-main.18
Volume:
Proceedings of the 19th International Conference on Natural Language Processing (ICON)
Month:
December
Year:
2022
Address:
New Delhi, India
Editors:
Md. Shad Akhtar, Tanmoy Chakraborty
Venue:
ICON
Publisher:
Association for Computational Linguistics
Pages:
128–138
URL:
https://aclanthology.org/2022.icon-main.18
Cite (ACL):
Suman Adhya, Avishek Lahiri, Debarshi Kumar Sanyal, and Partha Pratim Das. 2022. Improving Contextualized Topic Models with Negative Sampling. In Proceedings of the 19th International Conference on Natural Language Processing (ICON), pages 128–138, New Delhi, India. Association for Computational Linguistics.
Cite (Informal):
Improving Contextualized Topic Models with Negative Sampling (Adhya et al., ICON 2022)
PDF:
https://aclanthology.org/2022.icon-main.18.pdf