Heterogeneous Supervised Topic Models

Dhanya Sridhar, Hal Daumé III, David Blei


Abstract
Researchers in the social sciences are often interested in the relationship between text and an outcome of interest, where the goal is to both uncover latent patterns in the text and predict outcomes for unseen texts. To this end, this paper develops the heterogeneous supervised topic model (HSTM), a probabilistic approach to text analysis and prediction. HSTMs posit a joint model of text and outcomes to find heterogeneous patterns that help with both text analysis and prediction. The main benefit of HSTMs is that they capture heterogeneity in the relationship between text and the outcome across latent topics. To fit HSTMs, we develop a variational inference algorithm based on the auto-encoding variational Bayes framework. We study the performance of HSTMs on eight datasets and find that they consistently outperform related methods, including fine-tuned black-box models. Finally, we apply HSTMs to analyze news articles labeled with pro- or anti-tone. We find evidence of differing language used to signal a pro- and anti-tone.
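The abstract describes fitting a joint model of text and outcomes with auto-encoding variational Bayes, where topic proportions both reconstruct words and predict the outcome, with heterogeneous (topic-specific) word effects. The following is a minimal illustrative sketch of that general recipe, not the paper's actual model or parameterization: all dimensions, weight names (`W_h`, `beta`, `eta`, `gamma`), and the exact form of the outcome predictor are hypothetical choices made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): vocabulary, topics, encoder width.
V, K, H = 50, 5, 16

# Inference network (encoder), in the spirit of auto-encoding variational Bayes.
W_h = rng.normal(scale=0.1, size=(V, H))
W_mu = rng.normal(scale=0.1, size=(H, K))
W_logvar = rng.normal(scale=0.1, size=(H, K))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def encode(bow):
    """Map a bag-of-words vector to a Gaussian over topic logits."""
    h = np.tanh(bow @ W_h)
    return h @ W_mu, h @ W_logvar

def reparameterize(mu, logvar):
    """Sample topic logits via the reparameterization trick."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Decoder: topics reconstruct words; topic proportions predict the outcome.
beta = rng.normal(scale=0.1, size=(K, V))   # per-topic word weights
eta = rng.normal(scale=0.1, size=K)         # per-topic outcome weights
gamma = rng.normal(scale=0.1, size=(K, V))  # heterogeneous topic-by-word outcome weights

def forward(bow):
    mu, logvar = encode(bow)
    z = reparameterize(mu, logvar)
    theta = softmax(z)                  # topic proportions
    word_probs = softmax(theta @ beta)  # word reconstruction distribution
    # Outcome depends on topics AND topic-specific word effects,
    # capturing heterogeneity across topics.
    y_hat = theta @ eta + theta @ (gamma @ bow)
    return theta, word_probs, y_hat

bow = rng.integers(0, 3, size=V).astype(float)
theta, word_probs, y_hat = forward(bow)
print(theta.shape, round(word_probs.sum(), 6))
```

In a real fit, the encoder and decoder parameters would be trained jointly by maximizing an evidence lower bound that combines the word reconstruction term, the outcome likelihood, and a KL penalty on the latent topic logits; the forward pass above only shows how a single document flows through such a model.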
Anthology ID:
2022.tacl-1.42
Volume:
Transactions of the Association for Computational Linguistics, Volume 10
Year:
2022
Address:
Cambridge, MA
Editors:
Brian Roark, Ani Nenkova
Venue:
TACL
Publisher:
MIT Press
Pages:
732–745
URL:
https://aclanthology.org/2022.tacl-1.42
DOI:
10.1162/tacl_a_00487
Cite (ACL):
Dhanya Sridhar, Hal Daumé III, and David Blei. 2022. Heterogeneous Supervised Topic Models. Transactions of the Association for Computational Linguistics, 10:732–745.
Cite (Informal):
Heterogeneous Supervised Topic Models (Sridhar et al., TACL 2022)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/2022.tacl-1.42.pdf
Video:
https://preview.aclanthology.org/dois-2013-emnlp/2022.tacl-1.42.mp4