Benchmarking of Transformer-Based Pre-Trained Models on Social Media Text Classification Datasets

Yuting Guo, Xiangjue Dong, Mohammed Ali Al-Garadi, Abeed Sarker, Cecile Paris, Diego Mollá Aliod


Abstract
Free text data from social media is now widely used in natural language processing research, and one of the most common machine learning tasks performed on such data is classification. Generally, supervised classification algorithms perform worse on social media datasets than on texts from other sources, but recently proposed transformer-based models have considerably improved upon legacy state-of-the-art systems. To date, no study has compared the performance of different variants of transformer-based models across a wide range of social media text classification datasets. In this paper, we benchmark transformer-based pre-trained models on 25 social media text classification datasets, 6 of which are health-related. We compare three pre-trained language models, RoBERTa-base, BERTweet, and ClinicalBioBERT, in terms of classification accuracy. Our experiments show that RoBERTa-base and BERTweet perform comparably on most datasets, and considerably better than ClinicalBioBERT, even on health-related datasets.
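For illustration, below is a minimal sketch of the kind of pre-trained-model classification setup the paper benchmarks, using the Hugging Face transformers library. The checkpoint names ("roberta-base", "vinai/bertweet-base"), the two-label configuration, the example texts, and the sequence length are assumptions for the sketch; the authors' exact preprocessing and hyperparameters are not given in the abstract.

    # Minimal sketch (assumed configuration): sequence classification with a
    # pre-trained transformer checkpoint, as benchmarked in the paper.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    checkpoint = "roberta-base"  # or "vinai/bertweet-base" for BERTweet (assumed IDs)
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

    texts = ["feeling great after my run today!", "this medication made me so dizzy"]
    batch = tokenizer(texts, padding=True, truncation=True, max_length=128,
                      return_tensors="pt")

    with torch.no_grad():
        logits = model(**batch).logits   # shape: (batch_size, num_labels)
    predictions = logits.argmax(dim=-1)  # predicted class per post

    # In the benchmarking setting, the classification head would first be
    # fine-tuned on each labelled dataset, and accuracy measured on held-out data.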
Anthology ID:
2020.alta-1.10
Volume:
Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association
Month:
December
Year:
2020
Address:
Virtual Workshop
Venue:
ALTA
Publisher:
Australasian Language Technology Association
Pages:
86–91
URL:
https://aclanthology.org/2020.alta-1.10
Cite (ACL):
Yuting Guo, Xiangjue Dong, Mohammed Ali Al-Garadi, Abeed Sarker, Cecile Paris, and Diego Mollá Aliod. 2020. Benchmarking of Transformer-Based Pre-Trained Models on Social Media Text Classification Datasets. In Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association, pages 86–91, Virtual Workshop. Australasian Language Technology Association.
Cite (Informal):
Benchmarking of Transformer-Based Pre-Trained Models on Social Media Text Classification Datasets (Guo et al., ALTA 2020)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2020.alta-1.10.pdf
Data
OLID