Yuren Mao
2021
BanditMTL: Bandit-based Multi-task Learning for Text Classification
Yuren Mao | Zekai Wang | Weiwei Liu | Xuemin Lin | Wenbin Hu
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Task variance regularization, which can improve the generalization of Multi-task Learning (MTL) models, remains unexplored in multi-task text classification. To fill this gap, this paper investigates how task variance can be effectively regularized, and proposes a multi-task learning method based on an adversarial multi-armed bandit. The proposed method, named BanditMTL, regularizes the task variance by means of a mirror gradient ascent-descent algorithm. BanditMTL is found to achieve state-of-the-art performance in multi-task text classification. The results of extensive experiments back up our theoretical analysis and validate the superiority of our proposals.
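The mirror gradient ascent-descent idea named in the abstract can be illustrated with a minimal toy loop: adversarial task weights living on the probability simplex are updated by mirror ascent under the entropy mirror map (i.e., exponentiated gradient), while a shared parameter follows gradient descent on the weighted loss. The quadratic task losses, step sizes, and function names below are illustrative assumptions, not BanditMTL itself.

```python
import numpy as np

def task_losses(theta, targets):
    """Toy per-task loss L_i(theta) = 0.5 * (theta - t_i)^2 and its gradient."""
    diffs = theta - targets
    return 0.5 * diffs**2, diffs

def mirror_ascent_descent(targets, steps=500, eta_theta=0.05, eta_w=0.1):
    n = len(targets)
    theta = 0.0
    w = np.full(n, 1.0 / n)  # uniform initial weights on the simplex
    for _ in range(steps):
        losses, grads = task_losses(theta, targets)
        theta -= eta_theta * np.dot(w, grads)  # descent on the weighted loss
        w *= np.exp(eta_w * losses)            # exponentiated-gradient (mirror) ascent
        w /= w.sum()                           # re-normalize onto the simplex
    return theta, w

# The adversary shifts weight toward the worst-off tasks, so the min-max
# game pushes theta toward balancing the extreme targets.
targets = np.array([-1.0, 0.0, 2.0])
theta, w = mirror_ascent_descent(targets)
```

The exponentiated-gradient update is the closed form of mirror ascent with the negative-entropy mirror map, which keeps the weight vector on the simplex without an explicit projection step.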
2020
Tchebycheff Procedure for Multi-task Text Classification
Yuren Mao | Shuang Yun | Weiwei Liu | Bo Du
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Multi-task learning methods have achieved great progress in text classification. However, existing methods assume that multi-task text classification problems are convex multi-objective optimization problems, which is unrealistic in real-world applications. To address this issue, this paper presents a novel Tchebycheff procedure that optimizes multi-task classification problems without the convexity assumption. Extensive experiments back up our theoretical analysis and validate the superiority of our proposals.
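The weighted Tchebycheff scalarization underlying the title can be sketched briefly: instead of minimizing a weighted sum of task losses (which implicitly presumes convexity), one minimizes the weighted maximum deviation from an ideal point z*. The toy losses, weights, and grid search below are illustrative assumptions, not the paper's actual training procedure.

```python
import numpy as np

def tchebycheff(theta, losses, weights, z_star):
    """Weighted Tchebycheff objective: max_i w_i * |L_i(theta) - z*_i|."""
    vals = np.array([L(theta) for L in losses])
    return np.max(weights * np.abs(vals - z_star))

losses = [lambda th: (th - 1.0) ** 2,   # toy loss for task 1
          lambda th: (th + 1.0) ** 2]   # toy loss for task 2
weights = np.array([0.5, 0.5])
z_star = np.array([0.0, 0.0])           # ideal point: each task's minimum loss

# Minimize by grid search purely for illustration; any solver would do.
grid = np.linspace(-2.0, 2.0, 4001)
best = min(grid, key=lambda th: tchebycheff(th, losses, weights, z_star))
```

With equal weights the minimizer balances the two task losses, landing midway between the two tasks' individual optima; unlike a weighted sum, the max-based objective can reach such trade-off points even on non-convex Pareto fronts.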
Co-authors
- Weiwei Liu 2
- Zekai Wang 1
- Xuemin Lin 1
- Wenbin Hu 1
- Shuang Yun 1
- Bo Du 1
Venues
- ACL 2