2022
Enhancing Chinese Multi-Label Text Classification Performance with Response-based Knowledge Distillation
Szu-Chi Huang | Cheng-Fu Cao | Po-Hsun Liao | Lung-Hao Lee | Po-Lei Lee | Kuo-Kai Shyu
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)
It is difficult to optimize individual label performance in multi-label text classification, especially for imbalanced data containing long-tailed labels. Therefore, this study proposes a response-based knowledge distillation mechanism comprising a teacher model that optimizes binary classifiers for the corresponding labels and a student model that is a standalone multi-label classifier learning from the distilled knowledge passed by the teacher model. A total of 2,724 Chinese healthcare texts were collected and manually annotated across nine defined labels, resulting in 8,731 label annotations, with each text containing an average of 3.2 labels. We used 5-fold cross-validation to compare the performance of several multi-label models, including TextRNN, TextCNN, HAN, and GRU-att. Experimental results indicate that the proposed knowledge distillation mechanism effectively improved performance regardless of the model used, yielding gains of about 2-3% in micro-F1, 4-6% in macro-F1, 3-4% in weighted-F1, and 1-2% in subset accuracy.
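For illustration, a minimal sketch of a response-based distillation loss for multi-label classification is shown below. It assumes a PyTorch setup in which the per-label binary teacher outputs are stacked into a single logit vector and the student is trained on a weighted combination of ground-truth BCE and the teacher's temperature-softened responses; the temperature, weight ALPHA, and function names are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of response-based knowledge distillation for
# multi-label classification. Names, TEMPERATURE, and ALPHA are
# illustrative assumptions, not the paper's exact settings.
import torch
import torch.nn as nn

NUM_LABELS = 9          # nine annotated healthcare labels
TEMPERATURE = 2.0       # assumed softening temperature
ALPHA = 0.5             # assumed weight between hard- and soft-label terms

bce = nn.BCEWithLogitsLoss()

def distillation_loss(student_logits, teacher_logits, targets):
    """Combine ground-truth BCE with a soft-target term from the teacher.

    student_logits: (batch, NUM_LABELS) from the multi-label student.
    teacher_logits: (batch, NUM_LABELS) stacked outputs of the per-label
                    binary teacher classifiers.
    targets:        (batch, NUM_LABELS) binary ground-truth labels.
    """
    # Hard-label loss against the annotated labels.
    hard_loss = bce(student_logits, targets)

    # Soft-label loss: match the teacher's temperature-softened responses.
    soft_targets = torch.sigmoid(teacher_logits / TEMPERATURE)
    soft_loss = bce(student_logits / TEMPERATURE, soft_targets)

    return ALPHA * hard_loss + (1.0 - ALPHA) * soft_loss
```

In this sketch the teacher's per-label probabilities act as soft targets, so labels that are rare in the ground truth (the long-tailed ones) still contribute a graded training signal to the student.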