BERTweet: A pre-trained language model for English Tweets

Dat Quoc Nguyen, Thanh Vu, Anh Tuan Nguyen


Abstract
We present BERTweet, the first public large-scale pre-trained language model for English Tweets. Our BERTweet, having the same architecture as BERT-base (Devlin et al., 2019), is trained using the RoBERTa pre-training procedure (Liu et al., 2019). Experiments show that BERTweet outperforms the strong baselines RoBERTa-base and XLM-R-base (Conneau et al., 2020), producing better results than the previous state-of-the-art models on three Tweet NLP tasks: part-of-speech tagging, named-entity recognition and text classification. We release BERTweet under the MIT License to facilitate future research and applications on Tweet data. Our BERTweet is available at https://github.com/VinAIResearch/BERTweet
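
For reference, a minimal usage sketch (not part of the paper) of loading BERTweet through the Hugging Face transformers library to extract Tweet features. The model identifier "vinai/bertweet-base" and the example Tweet are assumptions taken from the authors' repository; check the README at the link above before relying on them.

import torch
from transformers import AutoModel, AutoTokenizer

# Load the pre-trained BERTweet encoder and its tokenizer.
# Assumed model ID: "vinai/bertweet-base" (per the authors' GitHub repository).
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
model = AutoModel.from_pretrained("vinai/bertweet-base")

# BERTweet was pre-trained on Tweets normalized so that user mentions become @USER
# and URLs become HTTPURL; this example assumes an already-normalized Tweet.
tweet = "SC has first two presumptive cases of coronavirus , DHEC confirms HTTPURL via @USER :cry:"

inputs = tokenizer(tweet, return_tensors="pt")
with torch.no_grad():
    # Contextual token embeddings, shape (1, sequence_length, 768) for the base model.
    features = model(**inputs).last_hidden_state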
Anthology ID:
2020.emnlp-demos.2
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Month:
October
Year:
2020
Address:
Online
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
9–14
URL:
https://aclanthology.org/2020.emnlp-demos.2
DOI:
10.18653/v1/2020.emnlp-demos.2
Cite (ACL):
Dat Quoc Nguyen, Thanh Vu, and Anh Tuan Nguyen. 2020. BERTweet: A pre-trained language model for English Tweets. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 9–14, Online. Association for Computational Linguistics.
Cite (Informal):
BERTweet: A pre-trained language model for English Tweets (Nguyen et al., EMNLP 2020)
PDF:
https://preview.aclanthology.org/starsem-semeval-split/2020.emnlp-demos.2.pdf
Code
VinAIResearch/BERTweet + additional community code
Data
Tweebank, TweetEval, WNUT 2016 NER, WNUT 2017