Abstract
Fine-tuning pre-trained language models (e.g., BERT) has achieved great success in many supervised language understanding tasks (e.g., text classification). However, relatively little work has focused on applying pre-trained models in unsupervised settings such as text clustering. In this paper, we propose a novel method that fine-tunes pre-trained models for text clustering in an unsupervised manner, simultaneously learning text representations and cluster assignments with a clustering-oriented loss. Experiments on three text clustering datasets (TREC-6, Yelp, and DBpedia) show that our model outperforms the baseline methods and achieves state-of-the-art results.
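The abstract does not spell out the clustering-oriented loss. As a rough illustration only, the sketch below shows one common objective of this kind: a DEC-style KL divergence between soft cluster assignments and a sharpened target distribution, computed on top of a BERT encoder from Hugging Face Transformers. The model name, cluster count, random centroid initialization, and learning rate are illustrative assumptions, not the authors' exact setup.

```python
# A minimal sketch of a DEC-style clustering-oriented objective on top of a BERT
# encoder. All hyperparameters here (bert-base-uncased, n_clusters=6, lr=2e-5)
# are illustrative assumptions, not the configuration used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer


class ClusteringHead(nn.Module):
    """Soft cluster assignments via a Student's t-distribution kernel (as in DEC)."""

    def __init__(self, hidden_size: int, n_clusters: int, alpha: float = 1.0):
        super().__init__()
        self.alpha = alpha
        # Learnable cluster centroids; in practice these are usually initialized
        # with k-means over the encoder's embeddings rather than at random.
        self.centroids = nn.Parameter(torch.randn(n_clusters, hidden_size))

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # q[i, j]: soft assignment of sample i to cluster j.
        dist_sq = torch.cdist(embeddings, self.centroids).pow(2)
        q = (1.0 + dist_sq / self.alpha).pow(-(self.alpha + 1.0) / 2.0)
        return q / q.sum(dim=1, keepdim=True)


def target_distribution(q: torch.Tensor) -> torch.Tensor:
    # Sharpened target distribution p; pulling q toward p emphasizes
    # high-confidence assignments and acts as the clustering-oriented loss.
    weight = q.pow(2) / q.sum(dim=0)
    return weight / weight.sum(dim=1, keepdim=True)


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
head = ClusteringHead(hidden_size=encoder.config.hidden_size, n_clusters=6)
optimizer = torch.optim.AdamW(
    list(encoder.parameters()) + list(head.parameters()), lr=2e-5
)

texts = ["What is the capital of France?", "Great food and friendly staff."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# One unsupervised update: encode, compute soft assignments, minimize KL(p || q).
optimizer.zero_grad()
embeddings = encoder(**batch).last_hidden_state[:, 0]  # [CLS] representations
q = head(embeddings)
p = target_distribution(q).detach()
loss = F.kl_div(q.log(), p, reduction="batchmean")
loss.backward()
optimizer.step()
```

In a full training loop, steps like this would alternate with periodic refreshes of the target distribution, jointly updating the encoder and the centroids so that representations and cluster assignments improve together, which is the behaviour the abstract describes.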
- Anthology ID: 2020.coling-main.482
- Volume: Proceedings of the 28th International Conference on Computational Linguistics
- Month: December
- Year: 2020
- Address: Barcelona, Spain (Online)
- Editors: Donia Scott, Núria Bel, Chengqing Zong
- Venue: COLING
- Publisher: International Committee on Computational Linguistics
- Pages: 5530–5534
- URL: https://aclanthology.org/2020.coling-main.482
- DOI: 10.18653/v1/2020.coling-main.482
- Cite (ACL): Shaohan Huang, Furu Wei, Lei Cui, Xingxing Zhang, and Ming Zhou. 2020. Unsupervised Fine-tuning for Text Clustering. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5530–5534, Barcelona, Spain (Online). International Committee on Computational Linguistics.
- Cite (Informal): Unsupervised Fine-tuning for Text Clustering (Huang et al., COLING 2020)
- PDF: https://preview.aclanthology.org/nschneid-patch-4/2020.coling-main.482.pdf