Adversarial Multi-task Learning for Text Classification

Pengfei Liu, Xipeng Qiu, Xuanjing Huang


Abstract
Neural network models have shown promise for multi-task learning, which focuses on learning shared layers to extract common, task-invariant features. However, in most existing approaches, the extracted shared features are prone to contamination by task-specific features or by noise brought in by other tasks. In this paper, we propose an adversarial multi-task learning framework that keeps the shared and private latent feature spaces from interfering with each other. We conduct extensive experiments on 16 different text classification tasks, which demonstrate the benefits of our approach. In addition, we show that the shared knowledge learned by our model can be treated as off-the-shelf knowledge and easily transferred to new tasks. The datasets for all 16 tasks are publicly available at http://nlp.fudan.edu.cn/data/.
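The abstract describes a shared-private architecture: each task has a private encoder, all tasks feed a common shared encoder, and a task discriminator trained adversarially pushes the shared features to be task-invariant, while an orthogonality penalty discourages overlap between the shared and private spaces. Below is a minimal PyTorch sketch of that idea, not the authors' released code; the GRU encoders, layer sizes, gradient-reversal trick, and module names are illustrative assumptions.

```python
# Minimal sketch of adversarial shared-private multi-task learning.
# All names and hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass,
    so the shared encoder learns to *fool* the task discriminator."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg()

class AdversarialSharedPrivate(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_tasks, num_classes):
        super().__init__()
        # One private encoder per task, plus one encoder shared by all tasks.
        self.private = nn.ModuleList(
            [nn.GRU(input_dim, hidden_dim, batch_first=True)
             for _ in range(num_tasks)])
        self.shared = nn.GRU(input_dim, hidden_dim, batch_first=True)
        # Discriminator tries to recover the task id from the shared features.
        self.discriminator = nn.Linear(hidden_dim, num_tasks)
        # Each task classifier consumes [shared; private] features.
        self.classifiers = nn.ModuleList(
            [nn.Linear(2 * hidden_dim, num_classes) for _ in range(num_tasks)])

    def forward(self, x, task_id):
        _, s = self.shared(x)             # task-invariant features
        _, p = self.private[task_id](x)   # task-specific features
        s, p = s.squeeze(0), p.squeeze(0)
        logits = self.classifiers[task_id](torch.cat([s, p], dim=-1))
        # Gradient reversal makes the adversarial loss *hurt* the
        # discriminator's job when backpropagated into the shared encoder.
        task_logits = self.discriminator(GradReverse.apply(s))
        # Squared Frobenius norm of S^T P penalizes shared/private overlap.
        ortho = (s.t() @ p).pow(2).sum()
        return logits, task_logits, ortho

# Usage: 16 tasks as in the paper; batch of 8 sequences of length 20.
model = AdversarialSharedPrivate(input_dim=100, hidden_dim=64,
                                 num_tasks=16, num_classes=2)
x = torch.randn(8, 20, 100)
logits, task_logits, ortho = model(x, task_id=3)
```

A full training objective under these assumptions would sum the per-task classification loss, a cross-entropy adversarial loss on task_logits (which the reversed gradients turn into a task-invariance pressure on the shared encoder), and the orthogonality penalty, each weighted by a hyperparameter.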
Anthology ID:
P17-1001
Volume:
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2017
Address:
Vancouver, Canada
Editors:
Regina Barzilay, Min-Yen Kan
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1–10
URL:
https://aclanthology.org/P17-1001
DOI:
10.18653/v1/P17-1001
Cite (ACL):
Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2017. Adversarial Multi-task Learning for Text Classification. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1–10, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Adversarial Multi-task Learning for Text Classification (Liu et al., ACL 2017)
PDF:
https://preview.aclanthology.org/ml4al-ingestion/P17-1001.pdf
Video:
https://preview.aclanthology.org/ml4al-ingestion/P17-1001.mp4
Data
IMDb Movie Reviews