Abstract
One of the major downsides of Deep Learning is its supposed need for vast amounts of training data. As such, these techniques appear ill-suited for NLP areas where annotated data is limited, such as less-resourced languages or emotion analysis, with its many nuanced and hard-to-acquire annotation formats. We conduct a questionnaire study indicating that indeed the vast majority of researchers in emotion analysis deems neural models inferior to traditional machine learning when training data is limited. In stark contrast to those survey results, we provide empirical evidence for English, Polish, and Portuguese that commonly used neural architectures can be trained on surprisingly few observations, outperforming n-gram-based ridge regression on only 100 data points. Our analysis suggests that high-quality, pre-trained word embeddings are a main factor in achieving those results.
- Anthology ID: 2020.peoples-1.13
- Volume: Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media
- Month: December
- Year: 2020
- Address: Barcelona, Spain (Online)
- Venue: PEOPLES
- Publisher: Association for Computational Linguistics
- Pages: 129–139
- URL: https://aclanthology.org/2020.peoples-1.13
- Cite (ACL): Sven Buechel, João Sedoc, H. Andrew Schwartz, and Lyle Ungar. 2020. Learning Emotion from 100 Observations: Unexpected Robustness of Deep Learning under Strong Data Limitations. In Proceedings of the Third Workshop on Computational Modeling of People's Opinions, Personality, and Emotions in Social Media, pages 129–139, Barcelona, Spain (Online). Association for Computational Linguistics.
- Cite (Informal): Learning Emotion from 100 Observations: Unexpected Robustness of Deep Learning under Strong Data Limitations (Buechel et al., PEOPLES 2020)
- PDF: https://preview.aclanthology.org/ingestion-script-update/2020.peoples-1.13.pdf
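For readers unfamiliar with the baseline named in the abstract above, the sketch below shows what an n-gram-based ridge regression for continuous emotion scores can look like. This is a minimal illustration, not the authors' code: the use of scikit-learn, TF-IDF weighting, the uni-/bigram range, and the regularization strength are all assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Tiny stand-in corpus: texts paired with a continuous emotion
# score in [0, 1] (e.g., valence); the paper's low-resource
# setting would use on the order of 100 such rows.
texts = [
    "I love this sunny day",
    "This is terrible news",
    "What a wonderful surprise",
    "I feel so sad and alone",
]
valence = [0.9, 0.1, 0.95, 0.05]

# Word uni-/bigram TF-IDF features feeding an L2-regularized
# linear model (ridge regression). All hyperparameters here are
# illustrative defaults, not values from the paper.
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    Ridge(alpha=1.0),
)
baseline.fit(texts, valence)

print(baseline.predict(["such a lovely evening"]))
```

The abstract's finding is that, against a baseline of this kind, neural models over high-quality pre-trained word embeddings remain competitive even at 100 training observations.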