A Sense-Topic Model for Word Sense Induction with Unsupervised Data Enrichment

Jing Wang, Mohit Bansal, Kevin Gimpel, Brian D. Ziebart, Clement T. Yu


Abstract
Word sense induction (WSI) seeks to automatically discover the senses of a word in a corpus via unsupervised methods. We propose a sense-topic model for WSI, which treats sense and topic as two separate latent variables to be inferred jointly. Topics are informed by the entire document, while senses are informed by the local context surrounding the ambiguous word. We also discuss unsupervised ways of enriching the original corpus in order to improve model performance, including using neural word embeddings and external corpora to expand the context of each data instance. We demonstrate significant improvements over the previous state-of-the-art, achieving the best results reported to date on the SemEval-2013 WSI task.
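The abstract's key structural idea is that each instance carries two separate latent variables: a topic generating document-level words and a sense generating the words in the local window around the ambiguous target. A minimal toy sketch of that generative structure (with made-up vocabularies and probabilities, not the paper's actual model or estimates) might look like:

```python
import random

# Toy sketch of the sense-topic idea for the ambiguous word "bank":
# a latent topic explains document-level words, while a separate latent
# sense explains the local context window. All distributions below are
# illustrative assumptions, not the paper's learned parameters.

TOPIC_WORDS = {
    0: (["finance", "market", "stock"], [0.5, 0.3, 0.2]),
    1: (["river", "water", "fishing"], [0.5, 0.3, 0.2]),
}
SENSE_WORDS = {
    0: (["deposit", "account", "loan"], [0.4, 0.4, 0.2]),  # bank = institution
    1: (["shore", "mud", "erosion"], [0.4, 0.4, 0.2]),     # bank = riverside
}

def generate_instance(rng, topic_probs, sense_probs,
                      n_doc_words=5, n_local_words=3):
    """Draw one instance: a topic and a sense are sampled independently,
    then the topic emits document words and the sense emits local words."""
    topic = rng.choices(sorted(TOPIC_WORDS), weights=topic_probs)[0]
    sense = rng.choices(sorted(SENSE_WORDS), weights=sense_probs)[0]
    words, weights = TOPIC_WORDS[topic]
    doc_words = rng.choices(words, weights=weights, k=n_doc_words)
    words, weights = SENSE_WORDS[sense]
    local_words = rng.choices(words, weights=weights, k=n_local_words)
    return topic, sense, doc_words, local_words

rng = random.Random(0)
topic, sense, doc_words, local_words = generate_instance(
    rng, topic_probs=[0.5, 0.5], sense_probs=[0.5, 0.5])
```

In the paper itself the two variables are inferred jointly rather than sampled independently as here; this sketch only illustrates which words inform which latent variable.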
Anthology ID: Q15-1005
Volume: Transactions of the Association for Computational Linguistics, Volume 3
Year: 2015
Address: Cambridge, MA
Editors: Michael Collins, Lillian Lee
Venue: TACL
Publisher: MIT Press
Pages: 59–71
URL: https://aclanthology.org/Q15-1005
DOI: 10.1162/tacl_a_00122
Cite (ACL): Jing Wang, Mohit Bansal, Kevin Gimpel, Brian D. Ziebart, and Clement T. Yu. 2015. A Sense-Topic Model for Word Sense Induction with Unsupervised Data Enrichment. Transactions of the Association for Computational Linguistics, 3:59–71.
Cite (Informal): A Sense-Topic Model for Word Sense Induction with Unsupervised Data Enrichment (Wang et al., TACL 2015)
PDF: https://preview.aclanthology.org/landing_page/Q15-1005.pdf