Refining Word Embeddings for Sentiment Analysis

Liang-Chih Yu, Jin Wang, K. Robert Lai, Xuejie Zhang


Abstract
Word embeddings that capture semantic and syntactic information from contexts have been extensively used for various natural language processing tasks. However, existing methods for learning context-based word embeddings typically fail to capture sufficient sentiment information. This may result in words with similar vector representations having opposite sentiment polarities (e.g., good and bad), thus degrading sentiment analysis performance. Therefore, this study proposes a word vector refinement model that can be applied to any pre-trained word vectors (e.g., Word2vec and GloVe). The refinement model adjusts the vector representations of words so that they are closer to words that are both semantically and sentimentally similar, and further away from sentimentally dissimilar words. Experimental results show that the proposed method can improve conventional word embeddings and outperform previously proposed sentiment embeddings for both binary and fine-grained classification on the Stanford Sentiment Treebank (SST).
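The abstract describes the refinement idea only in prose. The Python/NumPy sketch below illustrates one way such a refinement pass could look, under assumptions not taken from the paper: a valence lexicon supplies per-word sentiment scores, each vector is nudged toward its top-k semantic neighbors weighted by sentiment closeness, and a blending factor gamma preserves the original semantics. The neighbor count, weighting scheme, and update rule are illustrative choices, not the authors' exact formulation.

```python
import numpy as np

def refine_embeddings(vectors, valence, n_iter=10, k=10, gamma=0.1):
    """Illustrative sentiment-aware refinement of pre-trained vectors.

    A sketch of the idea in the abstract, not the authors' model:
    pull each word vector toward its semantic nearest neighbors,
    with sentimentally similar neighbors contributing more.

    vectors: dict word -> 1-D np.ndarray (e.g., from GloVe)
    valence: dict word -> sentiment score (e.g., a 1-9 lexicon rating)
    """
    words = [w for w in vectors if w in valence]
    V = np.stack([vectors[w] for w in words]).astype(float)
    val = np.array([valence[w] for w in words])

    for _ in range(n_iter):
        # cosine similarities between current (refined) vectors
        norm = V / np.linalg.norm(V, axis=1, keepdims=True)
        sim = norm @ norm.T
        np.fill_diagonal(sim, -np.inf)  # exclude self-similarity

        V_new = V.copy()
        for i in range(len(words)):
            # k semantically nearest neighbors of word i
            nbrs = np.argpartition(-sim[i], k)[:k]
            # weight neighbors by sentiment closeness, so that
            # words with similar valence pull harder and
            # dissimilar words contribute little
            w = 1.0 / (1.0 + np.abs(val[nbrs] - val[i]))
            w /= w.sum()
            target = w @ V[nbrs]
            # blend: keep most of the current vector, move the
            # rest toward the sentiment-weighted neighbor centroid
            V_new[i] = (1.0 - gamma) * V[i] + gamma * target
        V = V_new

    return {w: V[i] for i, w in enumerate(words)}
```

With gamma small and few iterations, refined vectors stay close to the originals (preserving semantics) while sentimentally dissimilar near-neighbors drift apart, which is the behavior the abstract targets.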
Anthology ID:
D17-1056
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
534–539
URL:
https://aclanthology.org/D17-1056
DOI:
10.18653/v1/D17-1056
Cite (ACL):
Liang-Chih Yu, Jin Wang, K. Robert Lai, and Xuejie Zhang. 2017. Refining Word Embeddings for Sentiment Analysis. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 534–539, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Refining Word Embeddings for Sentiment Analysis (Yu et al., EMNLP 2017)
PDF:
https://preview.aclanthology.org/nschneid-patch-2/D17-1056.pdf
Data
SST