Is Neural Topic Modelling Better than Clustering? An Empirical Study on Clustering with Contextual Embeddings for Topics

Zihan Zhang, Meng Fang, Ling Chen, Mohammad Reza Namazi Rad


Abstract
Recent work incorporates pre-trained word embeddings such as BERT embeddings into Neural Topic Models (NTMs), generating highly coherent topics. However, with high-quality contextualized document representations, do we really need sophisticated neural models to obtain coherent and interpretable topics? In this paper, we conduct thorough experiments showing that directly clustering high-quality sentence embeddings with an appropriate word selection method can generate more coherent and diverse topics than NTMs, while also achieving higher efficiency and simplicity.
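The recipe the abstract describes (embed documents, cluster the embeddings, select topic words per cluster) can be sketched in a few lines. The following is a minimal illustration under stated assumptions, not the authors' released pipeline (see the hyintell/topicx repository linked below): the encoder name, the cluster count, and the simple per-cluster TF-IDF word selection are all placeholder choices for the sake of the example.

    # Minimal sketch of the clustering-for-topics recipe.
    # Assumptions (not the authors' exact configuration): the
    # "all-MiniLM-L6-v2" encoder, KMeans with a fixed topic count,
    # and per-cluster TF-IDF scoring as the word selection method.
    import numpy as np
    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "the team won the championship game last night",
        "stocks fell sharply after the earnings report",
        "the striker scored twice in the second half",
        "investors worry about rising interest rates",
    ]

    # 1. Contextualized document embeddings from a pre-trained encoder.
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = encoder.encode(docs)

    # 2. Cluster the embeddings; each cluster is treated as one topic.
    n_topics = 2
    labels = KMeans(n_clusters=n_topics, n_init=10,
                    random_state=0).fit_predict(embeddings)

    # 3. Word selection: concatenate each cluster's documents, score
    #    words with TF-IDF, and keep the top-scoring words per topic.
    cluster_texts = [" ".join(d for d, l in zip(docs, labels) if l == k)
                     for k in range(n_topics)]
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(cluster_texts).toarray()
    vocab = np.array(vectorizer.get_feature_names_out())

    for k in range(n_topics):
        top_words = vocab[np.argsort(tfidf[k])[::-1][:5]]
        print(f"topic {k}: {', '.join(top_words)}")

On this toy corpus the two clusters separate the sports and finance documents, and the TF-IDF step surfaces words like "championship" or "stocks" as topic descriptors; the paper's experiments compare several such word selection methods against NTM baselines.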
Anthology ID:
2022.naacl-main.285
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
3886–3893
URL:
https://aclanthology.org/2022.naacl-main.285
DOI:
10.18653/v1/2022.naacl-main.285
Cite (ACL):
Zihan Zhang, Meng Fang, Ling Chen, and Mohammad Reza Namazi Rad. 2022. Is Neural Topic Modelling Better than Clustering? An Empirical Study on Clustering with Contextual Embeddings for Topics. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3886–3893, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Is Neural Topic Modelling Better than Clustering? An Empirical Study on Clustering with Contextual Embeddings for Topics (Zhang et al., NAACL 2022)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2022.naacl-main.285.pdf
Video:
https://preview.aclanthology.org/auto-file-uploads/2022.naacl-main.285.mp4
Code:
hyintell/topicx