Multi-Task Label Embedding for Text Classification
Honglun Zhang, Liqiang Xiao, Wenqing Chen, Yongkun Wang, Yaohui Jin
Abstract
Multi-task learning in text classification leverages implicit correlations among related tasks to extract common features and yield performance gains. However, a large body of previous work treats the labels of each task as independent and meaningless one-hot vectors, which causes a loss of potential label information. In this paper, we propose Multi-Task Label Embedding to convert labels in text classification into semantic vectors, thereby turning the original tasks into vector matching tasks. Our model utilizes semantic correlations among tasks and makes it convenient to scale or transfer when new tasks are involved. Extensive experiments on five benchmark datasets for text classification show that our model can effectively improve the performance of related tasks with semantic representations of labels and additional information from each other.
- Anthology ID:
- D18-1484
- Volume:
- Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
- Month:
- October-November
- Year:
- 2018
- Address:
- Brussels, Belgium
- Venue:
- EMNLP
- SIG:
- SIGDAT
- Publisher:
- Association for Computational Linguistics
- Pages:
- 4545–4553
- URL:
- https://aclanthology.org/D18-1484
- DOI:
- 10.18653/v1/D18-1484
- Cite (ACL):
- Honglun Zhang, Liqiang Xiao, Wenqing Chen, Yongkun Wang, and Yaohui Jin. 2018. Multi-Task Label Embedding for Text Classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4545–4553, Brussels, Belgium. Association for Computational Linguistics.
- Cite (Informal):
- Multi-Task Label Embedding for Text Classification (Zhang et al., EMNLP 2018)
- PDF:
- https://preview.aclanthology.org/ingestion-script-update/D18-1484.pdf
- Data
- IMDb Movie Reviews, SST
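
The abstract describes classification as matching a text vector against semantic label vectors rather than predicting over one-hot labels. The sketch below illustrates that idea only: it is not the paper's actual architecture (which uses LSTM encoders and multi-task training across datasets), and the module and variable names here are illustrative assumptions.

```python
# Minimal sketch of label embedding as vector matching, assuming mean-pooled
# word embeddings as a stand-in for the paper's sequence encoders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MatchingClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128):
        super().__init__()
        # Shared word embeddings for both input texts and label descriptions.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)

    def encode(self, token_ids):
        # Mean-pool word embeddings into a single vector, ignoring padding.
        mask = (token_ids != 0).unsqueeze(-1).float()
        vecs = self.embed(token_ids) * mask
        return vecs.sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)

    def forward(self, text_ids, label_ids):
        # text_ids: (batch, text_len); label_ids: (num_labels, label_len)
        text_vec = F.normalize(self.encode(text_ids), dim=-1)    # (batch, d)
        label_vec = F.normalize(self.encode(label_ids), dim=-1)  # (num_labels, d)
        # Cosine similarity between each text and every label embedding
        # serves as the classification score.
        return text_vec @ label_vec.t()                          # (batch, num_labels)

# Toy usage with two hypothetical labels (e.g. "negative", "positive"),
# each described by a single word id.
model = MatchingClassifier(vocab_size=1000)
texts = torch.randint(1, 1000, (4, 20))   # batch of 4 token-id sequences
labels = torch.randint(1, 1000, (2, 1))   # token ids of the 2 label words
logits = model(texts, labels)             # (4, 2) similarity scores
loss = F.cross_entropy(logits, torch.tensor([0, 1, 1, 0]))
```

Because labels are encoded from their own words rather than fixed output units, adding a new task or a new label only requires encoding its label text, which is the scaling and transfer convenience the abstract refers to.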