Israa Alghanmi


2021

Probing Pre-Trained Language Models for Disease Knowledge
Israa Alghanmi | Luis Espinosa Anke | Steven Schockaert
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

Combining BERT with Static Word Embeddings for Categorizing Social Media
Israa Alghanmi | Luis Espinosa Anke | Steven Schockaert
Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)

Pre-trained neural language models (LMs) have achieved impressive results in various natural language processing tasks, across different languages. Surprisingly, this extends to the social media genre, despite the fact that social media often has very different characteristics from the language that LMs have seen during training. A particularly striking example is the performance of AraBERT, an LM for the Arabic language, which is successful in categorizing social media posts in Arabic dialects, despite only having been trained on Modern Standard Arabic. Our hypothesis in this paper is that the performance of LMs for social media can nonetheless be improved by incorporating static word vectors that have been specifically trained on social media. We show that a simple method for incorporating such word vectors is indeed successful in several Arabic and English benchmarks. Curiously, however, we also find that similar improvements are possible with word vectors that have been trained on traditional text sources (e.g. Wikipedia).
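Below is a minimal sketch of one way such a combination could look in code: BERT's [CLS] representation is concatenated with averaged static word vectors for the post and fed to a linear classifier. The concatenation strategy, model name, embedding dimension, and class names here are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch: combining a BERT-style encoder with static word vectors
# for social media text classification. Names and dimensions are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class BertWithStaticEmbeddings(nn.Module):
    def __init__(self, bert_name="bert-base-uncased", static_dim=300, num_labels=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        # Classifier over the concatenation of BERT's [CLS] vector and the
        # averaged static word vectors of the input post.
        self.classifier = nn.Linear(hidden + static_dim, num_labels)

    def forward(self, input_ids, attention_mask, static_vectors):
        # static_vectors: (batch, static_dim), e.g. the mean of pre-trained
        # social-media (or Wikipedia) word embeddings for the tokens of each post.
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        pooled = outputs.last_hidden_state[:, 0]  # [CLS] token representation
        combined = torch.cat([pooled, static_vectors], dim=-1)
        return self.classifier(combined)


# Usage sketch with a placeholder static vector (would normally come from
# averaging pre-trained word embeddings such as fastText or GloVe).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertWithStaticEmbeddings()
enc = tokenizer(["great game last night!!"], return_tensors="pt", padding=True)
static = torch.randn(1, 300)
logits = model(enc["input_ids"], enc["attention_mask"], static)
```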