Abstract
We present BrainT, a multi-class averaged perceptron tested on implicit emotion prediction of tweets. We show that the dataset is linearly separable and explore ways of fine-tuning the baseline classifier. Our results indicate that bag-of-words features benefit the model moderately and that prediction can be improved with bigrams, trigrams, skip-one-tetragrams, and POS tags. Furthermore, we find that preprocessing the n-grams, including stemming, lowercasing, stopword filtering, and emoji and emoticon conversion, is generally not useful. The model was trained on an annotated corpus of 153,383 tweets, and predictions on the test data were submitted to the WASSA-2018 Implicit Emotion Shared Task, where BrainT attained a macro F-score of 0.63.
- Anthology ID: W18-6235
- Volume: Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis
- Month: October
- Year: 2018
- Address: Brussels, Belgium
- Editors: Alexandra Balahur, Saif M. Mohammad, Veronique Hoste, Roman Klinger
- Venue: WASSA
- Publisher: Association for Computational Linguistics
- Pages: 243–247
- URL: https://aclanthology.org/W18-6235
- DOI: 10.18653/v1/W18-6235
- Cite (ACL): Vachagan Gratian and Marina Haid. 2018. BrainT at IEST 2018: Fine-tuning Multiclass Perceptron For Implicit Emotion Classification. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 243–247, Brussels, Belgium. Association for Computational Linguistics.
- Cite (Informal): BrainT at IEST 2018: Fine-tuning Multiclass Perceptron For Implicit Emotion Classification (Gratian & Haid, WASSA 2018)
- PDF: https://preview.aclanthology.org/landing_page/W18-6235.pdf
- Code: ims-teamlab2018/Braint
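
The abstract describes a multi-class averaged perceptron over bag-of-n-gram features. For readers who want the shape of that setup before opening the repository, here is a minimal, self-contained sketch. It is an illustration, not the authors' released Braint code: the feature and class names, the toy training data, and the reading of "skip-one-tetragram" as a 4-gram drawn from a 5-token window with one interior token skipped are all assumptions, and the POS-tag features and preprocessing experiments are omitted.

```python
# Illustrative sketch only, not the authors' Braint implementation.
from collections import defaultdict

def ngram_features(tokens):
    """Bag of unigrams, bigrams, trigrams and (assumed) skip-one-tetragrams."""
    feats = defaultdict(float)
    for tok in tokens:                                   # unigrams (bag of words)
        feats[("uni", tok)] += 1.0
    for a, b in zip(tokens, tokens[1:]):                 # bigrams
        feats[("bi", a, b)] += 1.0
    for a, b, c in zip(tokens, tokens[1:], tokens[2:]):  # trigrams
        feats[("tri", a, b, c)] += 1.0
    # Assumed reading of "skip-one-tetragram": 4 tokens taken from a
    # 5-token window, skipping one interior position.
    for i in range(len(tokens) - 4):
        a, b, c, d, e = tokens[i:i + 5]
        for quad in ((a, b, d, e), (a, c, d, e), (a, b, c, e)):
            feats[("skip",) + quad] += 1.0
    return feats

class AveragedPerceptron:
    """Multi-class perceptron; final weights are averaged over all updates."""

    def __init__(self, classes):
        self.classes = list(classes)
        self.w = {c: defaultdict(float) for c in self.classes}       # current weights
        self.totals = {c: defaultdict(float) for c in self.classes}  # running sums
        self.steps = 0

    def score(self, feats, c):
        return sum(v * self.w[c][f] for f, v in feats.items())

    def predict(self, feats):
        return max(self.classes, key=lambda c: self.score(feats, c))

    def fit(self, data, epochs=5):
        for _ in range(epochs):
            for tokens, gold in data:
                feats = ngram_features(tokens)
                pred = self.predict(feats)
                if pred != gold:  # standard perceptron update on a mistake
                    for f, v in feats.items():
                        self.w[gold][f] += v
                        self.w[pred][f] -= v
                self.steps += 1
                for c in self.classes:  # accumulate for averaging at every step
                    for f, wv in self.w[c].items():
                        self.totals[c][f] += wv
        for c in self.classes:  # replace weights with their running average
            for f in self.w[c]:
                self.w[c][f] = self.totals[c][f] / self.steps

# Toy usage with two of the six IEST emotion labels (hypothetical data).
train = [("i could cry right now".split(), "sad"),
         ("what a wonderful surprise this is".split(), "joy")]
clf = AveragedPerceptron(["sad", "joy"])
clf.fit(train)
print(clf.predict(ngram_features("such a wonderful day".split())))  # -> "joy"
```

Averaging the weight vectors over every training step, rather than keeping only the final weights, is the standard stabilizer for perceptrons on noisy text and is presumably why the abstract specifies an averaged rather than vanilla perceptron.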