Abstract
Distributed representations of sentences have been developed recently to represent their meaning as real-valued vectors. However, it is not clear how much information such representations retain about the polarity of sentences. To study this question, we decode sentiment from unsupervised sentence representations learned with different architectures (sensitive to the order of words, the order of sentences, or none) in 9 typologically diverse languages. Sentiment results from the (recursive) composition of lexical items and grammatical strategies such as negation and concession. The results are manifold: we show that there is no ‘one-size-fits-all’ representation architecture outperforming the others across the board. Rather, the top-ranking architectures depend on the language at hand. Moreover, we find that in several cases the additive composition model based on skip-gram word vectors may surpass supervised state-of-the-art architectures such as bi-directional LSTMs. Finally, we provide a possible explanation of the observed variation based on the type of negative constructions in each language.
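The additive composition baseline mentioned in the abstract can be made concrete with a short sketch: a sentence vector is the element-wise sum of its skip-gram word vectors, and a separate supervised classifier then "decodes" sentiment from that fixed representation. The snippet below is a minimal illustration only, assuming gensim for skip-gram training and a scikit-learn logistic regression as the decoder; the toy data, library choices, and hyperparameters are assumptions and do not come from the paper.

```python
# Sketch of an additive-composition sentiment decoder (illustrative assumptions only).
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Toy corpus of (tokenized sentence, sentiment label) pairs.
data = [
    (["the", "film", "was", "wonderful"], 1),
    (["i", "loved", "every", "minute"], 1),
    (["the", "plot", "was", "not", "good"], 0),
    (["a", "dull", "and", "tedious", "story"], 0),
]

# Train skip-gram word vectors (sg=1) on the sentences alone:
# the representations are learned without sentiment supervision.
sentences = [tokens for tokens, _ in data]
w2v = Word2Vec(sentences, vector_size=50, sg=1, min_count=1, epochs=50, seed=0)

def sentence_vector(tokens):
    """Additive composition: sum the skip-gram vectors of the sentence's tokens."""
    vectors = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.sum(vectors, axis=0) if vectors else np.zeros(w2v.vector_size)

X = np.stack([sentence_vector(tokens) for tokens, _ in data])
y = np.array([label for _, label in data])

# The "decoder": a supervised classifier trained on top of the frozen sentence vectors.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X))  # sanity check on the toy training data
```

In this setup only the classifier sees the sentiment labels, so its accuracy reflects how much polarity information the unsupervised sentence representation retains, which is the question the paper studies across architectures and languages.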
- Anthology ID: S17-1003
- Volume: Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017)
- Month: August
- Year: 2017
- Address: Vancouver, Canada
- Venue: *SEM
- SIGs: SIGSEM | SIGLEX
- Publisher: Association for Computational Linguistics
- Pages: 22–32
- URL: https://aclanthology.org/S17-1003
- DOI: 10.18653/v1/S17-1003
- Cite (ACL): Edoardo Maria Ponti, Ivan Vulić, and Anna Korhonen. 2017. Decoding Sentiment from Distributed Representations of Sentences. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (*SEM 2017), pages 22–32, Vancouver, Canada. Association for Computational Linguistics.
- Cite (Informal): Decoding Sentiment from Distributed Representations of Sentences (Ponti et al., *SEM 2017)
- PDF: https://aclanthology.org/S17-1003.pdf