Deep Automated Multi-task Learning

Davis Liang, Yan Shu


Abstract
Multi-task learning (MTL) has recently contributed to learning better representations for a variety of NLP tasks. MTL aims to improve the performance of a primary task by jointly training on a secondary task. This paper introduces automated tasks, which exploit the sequential nature of the input data, as secondary tasks in an MTL model. We explore next-word prediction, next-character prediction, and missing-word completion as potential automated tasks. Our results show that training on a primary task in parallel with a secondary automated task improves both convergence speed and accuracy on the primary task. We suggest two methods for augmenting an existing network with automated tasks and demonstrate improved performance on topic prediction, sentiment analysis, and hashtag recommendation. Finally, we show that the MTL models perform well on small, colloquial datasets.
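The idea described in the abstract lends itself to a compact sketch: a shared encoder feeds a primary classification head and an automated next-word prediction head whose targets are simply the input shifted by one token, so the secondary task needs no extra annotation. The following PyTorch snippet is a minimal, hypothetical illustration, not the authors' implementation; all names, dimensions, and the 0.5 auxiliary loss weight are assumptions for the example.

import torch
import torch.nn as nn

class AutomatedMTLModel(nn.Module):
    # Shared encoder with two heads: a primary classifier and an automated
    # next-word prediction head trained on the same input sequence.
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.primary_head = nn.Linear(hidden_dim, num_classes)   # e.g. topic label
        self.auxiliary_head = nn.Linear(hidden_dim, vocab_size)  # next-word logits

    def forward(self, token_ids):
        states, (h_n, _) = self.encoder(self.embedding(token_ids))
        primary_logits = self.primary_head(h_n[-1])     # final hidden state -> class
        auxiliary_logits = self.auxiliary_head(states)  # every time step -> next word
        return primary_logits, auxiliary_logits

model = AutomatedMTLModel(vocab_size=10000, embed_dim=128, hidden_dim=256, num_classes=4)
tokens = torch.randint(0, 10000, (8, 20))  # batch of 8 sequences, 20 tokens each
labels = torch.randint(0, 4, (8,))         # primary-task labels
primary_logits, aux_logits = model(tokens)
ce = nn.CrossEntropyLoss()
# Automated-task targets are the input shifted by one position (no labels needed).
loss = ce(primary_logits, labels) + 0.5 * ce(
    aux_logits[:, :-1].reshape(-1, 10000), tokens[:, 1:].reshape(-1))
loss.backward()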
Anthology ID:
I17-2010
Volume:
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Month:
November
Year:
2017
Address:
Taipei, Taiwan
Editors:
Greg Kondrak, Taro Watanabe
Venue:
IJCNLP
Publisher:
Asian Federation of Natural Language Processing
Pages:
55–60
URL:
https://aclanthology.org/I17-2010/
Cite (ACL):
Davis Liang and Yan Shu. 2017. Deep Automated Multi-task Learning. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 55–60, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Cite (Informal):
Deep Automated Multi-task Learning (Liang & Shu, IJCNLP 2017)
PDF:
https://aclanthology.org/I17-2010.pdf
Data
AG News