Abstract
Recently, the relationship between automated and human evaluation of topic models has been called into question. Method developers have staked the efficacy of new topic model variants on automated measures, and their failure to approximate human preferences places these models on uncertain ground. Moreover, existing evaluation paradigms are often divorced from real-world use. Motivated by content analysis as a dominant real-world use case for topic modeling, we analyze two related aspects of topic models that affect their effectiveness and trustworthiness in practice for that purpose: the stability of their estimates and the extent to which the model’s discovered categories align with human-determined categories in the data. We find that neural topic models fare worse in both respects compared to an established classical method. We take a step toward addressing both issues in tandem by demonstrating that a straightforward ensembling method can reliably outperform the members of the ensemble.
- Anthology ID:
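The abstract's notion of estimate stability can be made concrete with a small, hypothetical sketch (not the paper's own protocol): fit the same topic model under two random seeds, align the resulting topics one-to-one with Hungarian matching, and report the mean similarity of the matched pairs. The toy corpus, seed choices, and two-topic setting below are all illustrative assumptions.

```python
# Illustrative sketch (assumptions, not the paper's method): measure
# topic-model stability by fitting under two seeds and aligning topics.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Tiny toy corpus with two rough themes (pets vs. markets).
docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stocks fell as markets dropped",
    "investors sold shares in the market",
    "the dog chased the cat",
    "market prices rose on strong earnings",
]

X = CountVectorizer(stop_words="english").fit_transform(docs)

def topic_word(seed, n_topics=2):
    """Fit LDA with a given seed and return row-normalized topic-word distributions."""
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    lda.fit(X)
    tw = lda.components_
    return tw / tw.sum(axis=1, keepdims=True)

a, b = topic_word(0), topic_word(1)

# Cosine similarity between every topic pair across the two runs.
an = a / np.linalg.norm(a, axis=1, keepdims=True)
bn = b / np.linalg.norm(b, axis=1, keepdims=True)
sim = an @ bn.T

# Hungarian matching finds the best one-to-one topic alignment; the mean
# matched similarity is a simple stability score in [0, 1].
rows, cols = linear_sum_assignment(-sim)
stability = sim[rows, cols].mean()
print(f"stability across seeds: {stability:.3f}")
```

A score near 1 means the two runs recovered essentially the same topics; on real corpora one would average over many seed pairs rather than a single comparison.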
- 2022.findings-emnlp.390
- Volume:
- Findings of the Association for Computational Linguistics: EMNLP 2022
- Month:
- December
- Year:
- 2022
- Address:
- Abu Dhabi, United Arab Emirates
- Editors:
- Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 5321–5344
- URL:
- https://aclanthology.org/2022.findings-emnlp.390
- DOI:
- 10.18653/v1/2022.findings-emnlp.390
- Cite (ACL):
- Alexander Miserlis Hoyle, Pranav Goel, Rupak Sarkar, and Philip Resnik. 2022. Are Neural Topic Models Broken? In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5321–5344, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Cite (Informal):
- Are Neural Topic Models Broken? (Hoyle et al., Findings 2022)
- PDF:
- https://preview.aclanthology.org/alta-23-ingestion/2022.findings-emnlp.390.pdf