Embarrassingly Simple Unsupervised Aspect Extraction

Stéphan Tulkens, Andreas van Cranenburgh


Abstract
We present a simple but effective method for aspect identification in sentiment analysis. Our unsupervised method only requires word embeddings and a POS tagger, and is therefore straightforward to apply to new domains and languages. We introduce Contrastive Attention (CAt), a novel single-head attention mechanism based on an RBF kernel, which gives a considerable boost in performance and makes the model interpretable. Previous work relied on syntactic features and complex neural models. We show that given the simplicity of current benchmark datasets for aspect extraction, such complex models are not needed. The code to reproduce the experiments reported in this paper is available at https://github.com/clips/cat.
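As a rough illustration of the idea in the abstract, the sketch below shows one way a single-head, RBF-kernel-based contrastive attention over word embeddings could be implemented. The function names, the `gamma` value, and the exact normalization are illustrative assumptions, not the paper's verified implementation; consult the linked repository (clips/cat) for the authors' actual code.

```python
import numpy as np

def rbf(x, y, gamma=0.03):
    """RBF kernel between two vectors: exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def contrastive_attention(word_vecs, aspect_vecs, gamma=0.03):
    """Weight each word by its total RBF similarity to a set of
    candidate aspect vectors, normalized over the sentence.
    (Illustrative sketch; gamma is a hypothetical default.)"""
    scores = np.array([sum(rbf(w, a, gamma) for a in aspect_vecs)
                       for w in word_vecs])
    return scores / scores.sum()

def sentence_summary(word_vecs, aspect_vecs, gamma=0.03):
    """Attention-weighted average of the word vectors; the resulting
    vector can then be compared to aspect label embeddings."""
    att = contrastive_attention(word_vecs, aspect_vecs, gamma)
    return att @ word_vecs
```

Because the mechanism is a single kernel-weighted average with no learned parameters, each word's contribution to the sentence representation can be read directly off the attention weights, which is what makes the model interpretable.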
Anthology ID:
2020.acl-main.290
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
3182–3187
URL:
https://aclanthology.org/2020.acl-main.290
DOI:
10.18653/v1/2020.acl-main.290
Cite (ACL):
Stéphan Tulkens and Andreas van Cranenburgh. 2020. Embarrassingly Simple Unsupervised Aspect Extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3182–3187, Online. Association for Computational Linguistics.
Cite (Informal):
Embarrassingly Simple Unsupervised Aspect Extraction (Tulkens & van Cranenburgh, ACL 2020)
PDF:
https://preview.aclanthology.org/landing_page/2020.acl-main.290.pdf
Video:
http://slideslive.com/38929041
Code:
clips/cat