An Empirical Survey of Unsupervised Text Representation Methods on Twitter Data
Lili Wang, Chongyang Gao, Jason Wei, Weicheng Ma, Ruibo Liu, Soroush Vosoughi
Abstract
The field of NLP has seen unprecedented achievements in recent years. Most notably, with the advent of large-scale pre-trained Transformer-based language models, such as BERT, there has been a noticeable improvement in text representation. It is, however, unclear whether these improvements translate to noisy user-generated text, such as tweets. In this paper, we present an experimental survey of a wide range of well-known text representation techniques for the task of text clustering on noisy Twitter data. Our results indicate that the more advanced models do not necessarily work best on tweets and that more exploration in this area is needed.
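As a minimal sketch of the kind of pipeline the survey evaluates (not the authors' exact setup), the snippet below embeds a handful of tweets with a pre-trained BERT model and clusters the mean-pooled token embeddings with k-means. The checkpoint, pooling strategy, and cluster count are illustrative assumptions.

```python
import torch
from sklearn.cluster import KMeans
from transformers import AutoModel, AutoTokenizer

# Toy tweets for illustration; the paper evaluates on real Twitter datasets.
tweets = [
    "loving the new phone, battery lasts forever",
    "ugh my battery died again smh",
    "great goal in the last minute!!",
    "what a match, incredible finish",
]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

with torch.no_grad():
    enc = tokenizer(tweets, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**enc).last_hidden_state             # (batch, seq_len, hidden)
    mask = enc["attention_mask"].unsqueeze(-1)          # zero out padding tokens
    emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # mean-pooled tweet vectors

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb.numpy())
print(labels)  # e.g. [0 0 1 1]: phone tweets vs. football tweets
```

Swapping the embedding step (e.g. averaged word vectors instead of BERT) while keeping the clustering fixed is the kind of comparison the survey performs.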
- Anthology ID: 2020.wnut-1.27
- Volume: Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)
- Month: November
- Year: 2020
- Address: Online
- Editors: Wei Xu, Alan Ritter, Tim Baldwin, Afshin Rahimi
- Venue: WNUT
- Publisher: Association for Computational Linguistics
- Pages: 209–214
- URL: https://aclanthology.org/2020.wnut-1.27
- DOI: 10.18653/v1/2020.wnut-1.27
- Cite (ACL): Lili Wang, Chongyang Gao, Jason Wei, Weicheng Ma, Ruibo Liu, and Soroush Vosoughi. 2020. An Empirical Survey of Unsupervised Text Representation Methods on Twitter Data. In Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020), pages 209–214, Online. Association for Computational Linguistics.
- Cite (Informal): An Empirical Survey of Unsupervised Text Representation Methods on Twitter Data (Wang et al., WNUT 2020)
- PDF: https://aclanthology.org/2020.wnut-1.27.pdf