Abstract
Pre-trained neural language models (LMs) have achieved impressive results in various natural language processing tasks, across different languages. Surprisingly, this extends to the social media genre, despite the fact that social media often has very different characteristics from the language that LMs have seen during training. A particularly striking example is the performance of AraBERT, an LM for the Arabic language, which is successful in categorizing social media posts in Arabic dialects, despite only having been trained on Modern Standard Arabic. Our hypothesis in this paper is that the performance of LMs for social media can nonetheless be improved by incorporating static word vectors that have been specifically trained on social media. We show that a simple method for incorporating such word vectors is indeed successful in several Arabic and English benchmarks. Curiously, however, we also find that similar improvements are possible with word vectors that have been trained on traditional text sources (e.g. Wikipedia).
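The abstract only states that "a simple method" is used to incorporate static word vectors alongside BERT; it does not spell out the architecture. The sketch below is one plausible instantiation, assumed for illustration: the BERT [CLS] representation of a post is concatenated with the average of pre-trained static word vectors (e.g. fastText) for that post, and the concatenation is fed to a linear classifier. The model name, embedding dimension, and combination strategy are assumptions, not the paper's exact setup.

```python
# A minimal sketch (assumed architecture, not the paper's exact method):
# concatenate BERT's [CLS] vector with an averaged static word vector
# and classify with a single linear layer.
import torch
import torch.nn as nn
from transformers import AutoModel


class BertWithStaticEmbeddings(nn.Module):
    def __init__(self, bert_name="bert-base-uncased", static_dim=300, num_labels=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        # Classifier over [CLS] concatenated with the averaged static vector.
        self.classifier = nn.Linear(hidden + static_dim, num_labels)

    def forward(self, input_ids, attention_mask, static_vectors):
        # static_vectors: (batch, static_dim) -- the mean of pre-trained
        # word vectors (e.g. fastText) for the tokens of each post,
        # computed outside the model.
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls_vec = outputs.last_hidden_state[:, 0]              # [CLS] token
        combined = torch.cat([cls_vec, static_vectors], dim=-1)
        return self.classifier(combined)
```

For Arabic benchmarks, `bert_name` would be replaced with an AraBERT checkpoint and the static vectors with embeddings trained on (social media or traditional) Arabic text, mirroring the comparison the abstract describes.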
- Anthology ID: 2020.wnut-1.5
- Volume: Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)
- Month: November
- Year: 2020
- Address: Online
- Editors: Wei Xu, Alan Ritter, Tim Baldwin, Afshin Rahimi
- Venue: WNUT
- Publisher: Association for Computational Linguistics
- Pages: 28–33
- URL: https://aclanthology.org/2020.wnut-1.5
- DOI: 10.18653/v1/2020.wnut-1.5
- Cite (ACL): Israa Alghanmi, Luis Espinosa Anke, and Steven Schockaert. 2020. Combining BERT with Static Word Embeddings for Categorizing Social Media. In Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020), pages 28–33, Online. Association for Computational Linguistics.
- Cite (Informal): Combining BERT with Static Word Embeddings for Categorizing Social Media (Alghanmi et al., WNUT 2020)
- PDF: https://preview.aclanthology.org/improve-issue-templates/2020.wnut-1.5.pdf