Xuanting Chen
2022
Making Parameter-efficient Tuning More Efficient: A Unified Framework for Classification Tasks
Xin Zhou | Ruotian Ma | Yicheng Zou | Xuanting Chen | Tao Gui | Qi Zhang | Xuanjing Huang | Rui Xie | Wei Wu
Proceedings of the 29th International Conference on Computational Linguistics
Large pre-trained language models (PLMs) have demonstrated superior performance in industrial applications. Recent studies have explored parameter-efficient PLM tuning, which updates only a small number of task-specific parameters while achieving both high efficiency and performance comparable to standard fine-tuning. However, these methods all overlook the inefficiency caused by task-specific output layers, which make it inflexible to re-use PLMs and introduce a non-negligible number of parameters. In this work, we focus on the text classification task and propose plugin-tuning, a framework that further improves the efficiency of existing parameter-efficient methods with a unified classifier. Specifically, we re-formulate both token and sentence classification tasks as a unified language modeling task and map the label spaces of different tasks into the same vocabulary space. In this way, we can directly re-use the language modeling heads of PLMs, avoiding extra parameters for different tasks. We conduct experiments on six classification benchmarks. The results show that plugin-tuning achieves performance comparable to fine-tuned PLMs while saving around 50% of parameters on top of other parameter-efficient methods.
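The core idea of mapping task labels into the vocabulary space so the PLM's own language modeling head does the classification can be illustrated with a short sketch. This is not the authors' implementation; it assumes a BERT-style masked language model and uses hypothetical verbalizer tokens ("great"/"terrible") and a hand-written template purely for illustration.

```python
# Minimal sketch (assumption, not the paper's code): sentence classification
# re-cast as masked language modeling, so the PLM's existing LM head replaces
# a task-specific output layer.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Map task labels into the vocabulary space via (hypothetical) verbalizer tokens.
verbalizer = {"positive": "great", "negative": "terrible"}
label_ids = {label: tokenizer.convert_tokens_to_ids(tok) for label, tok in verbalizer.items()}

def classify(sentence: str) -> str:
    # Wrap the input in a template whose [MASK] slot the LM head fills in.
    prompt = f"{sentence} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    # Score each label by the LM-head logit of its verbalizer token at the mask.
    scores = {label: logits[0, mask_pos, tid].item() for label, tid in label_ids.items()}
    return max(scores, key=scores.get)

print(classify("The movie was wonderful."))
```

Because the prediction head is the frozen LM head shared across tasks, only the small set of tuned parameters (e.g., prompts or adapters) would differ per task under such a scheme.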