Adapting Topic Models using Lexical Associations with Tree Priors

Weiwei Yang, Jordan Boyd-Graber, Philip Resnik


Abstract
Models work best when they are optimized taking into account the evaluation criteria that people care about. For topic models, people often care about interpretability, which can be approximated using measures of lexical association. We integrate lexical association into topic optimization using tree priors, which provide a flexible framework that can take advantage of both first order word associations and the higher-order associations captured by word embeddings. Tree priors improve topic interpretability without hurting extrinsic performance.
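The abstract's idea of turning embedding-based word associations into a tree prior can be illustrated with a minimal sketch: group words whose embeddings are similar under a shared internal node, yielding a two-level tree over the vocabulary that a tree-structured Dirichlet prior could then use to encourage associated words to co-occur in topics. This is an illustrative construction under assumed inputs, not the paper's exact method; the function name, toy embeddings, and similarity threshold are all hypothetical.

```python
import numpy as np

def build_two_level_tree(vocab, embeddings, threshold=0.7):
    """Greedily group words into a two-level tree: words whose embedding
    cosine similarity with a seed word exceeds `threshold` share an
    internal node; each returned group is one internal node's children.
    (Hypothetical helper for illustration, not the paper's algorithm.)"""
    # Normalize rows so dot products are cosine similarities.
    norm = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    unassigned = list(range(len(vocab)))
    groups = []
    while unassigned:
        seed = unassigned.pop(0)
        group = [seed]
        for w in unassigned[:]:
            if norm[seed] @ norm[w] > threshold:
                group.append(w)
                unassigned.remove(w)
        groups.append([vocab[i] for i in group])
    return groups

# Toy vocabulary with made-up 2-d embeddings: "dog" and "cat" point in
# similar directions, "car" does not.
vocab = ["dog", "cat", "car"]
emb = np.array([[1.0, 0.1],
                [0.9, 0.2],
                [0.0, 1.0]])
tree = build_two_level_tree(vocab, emb)
print(tree)  # → [['dog', 'cat'], ['car']]
```

Each group would become an internal node under the root, with per-edge Dirichlet hyperparameters chosen so that drawing one child of a node makes its siblings more probable, which is how a tree prior injects lexical association into topic inference.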
Anthology ID:
D17-1203
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
1901–1906
URL:
https://aclanthology.org/D17-1203
DOI:
10.18653/v1/D17-1203
Cite (ACL):
Weiwei Yang, Jordan Boyd-Graber, and Philip Resnik. 2017. Adapting Topic Models using Lexical Associations with Tree Priors. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1901–1906, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Adapting Topic Models using Lexical Associations with Tree Priors (Yang et al., EMNLP 2017)
PDF:
https://preview.aclanthology.org/naacl24-info/D17-1203.pdf