Tao Zhu


2022

Parameter-efficient Continual Learning Framework in Industrial Real-time Text Classification System
Tao Zhu | Zhe Zhao | Weijie Liu | Jiachi Liu | Yiren Chen | Weiquan Mao | Haoyan Liu | Kunbo Ding | Yudong Li | Xuefeng Yang
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track

Catastrophic forgetting is a challenge for model deployment in industrial real-time systems, which require the model to quickly master a new task without forgetting old ones. Continual learning aims to solve this problem; however, it usually updates all the model parameters, resulting in long training times and the inability to deploy quickly. To address this challenge, we propose a parameter-efficient continual learning framework, in which efficient parameters are selected through an offline parameter selection strategy and then trained with an online regularization method. In our framework, only a few parameters need to be updated, which not only alleviates catastrophic forgetting but also allows the model to be saved with only the changed parameters instead of all of them. Extensive experiments are conducted to examine the effectiveness of our proposal. We believe this paper provides useful insights and experience for developing deep learning-based online real-time systems.
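The abstract's pipeline (offline selection of a small parameter subset, online regularized updates to only that subset, and checkpointing only what changed) can be illustrated with a minimal sketch. This is an assumed toy illustration, not the paper's implementation: the importance scores, regularizer form, and all names (`select_params`, `regularized_step`, `delta_checkpoint`) are hypothetical.

```python
# Toy sketch of parameter-efficient continual learning (assumed, not the
# paper's method): pick a few "efficient" parameters offline, update only
# those online with an L2 pull toward their old-task values, and save just
# the delta instead of the whole model.

def select_params(scores, budget):
    """Offline selection: keep the `budget` highest-scoring parameter names."""
    return set(sorted(scores, key=scores.get, reverse=True)[:budget])

def regularized_step(params, grads, selected, anchor, lr=0.1, lam=0.5):
    """Online step: update only selected params; the L2 term pulls them
    toward `anchor` (the pre-update snapshot), discouraging forgetting."""
    for name in selected:
        reg = lam * (params[name] - anchor[name])
        params[name] -= lr * (grads[name] + reg)
    return params

def delta_checkpoint(params, anchor, selected):
    """Save only the parameters that were allowed to change."""
    return {n: params[n] for n in selected if params[n] != anchor[n]}

params = {"w1": 1.0, "w2": -2.0, "w3": 0.5}
anchor = dict(params)                        # snapshot before the new task
scores = {"w1": 0.9, "w2": 0.1, "w3": 0.4}   # hypothetical importance scores
sel = select_params(scores, budget=1)
params = regularized_step(params, {"w1": 1.0, "w2": 1.0, "w3": 1.0}, sel, anchor)
ckpt = delta_checkpoint(params, anchor, sel)
```

Because only the selected parameters ever move, the checkpoint contains a single entry here, which is the storage saving the abstract alludes to.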

一种非结构化数据表征增强的术后风险预测模型(An Unstructured Data Representation Enhanced Model for Postoperative Risk Prediction)
Yaqiang Wang (王亚强) | Xiao Yang (杨潇) | Xuechao Hao (郝学超) | Hongping Shu (舒红平) | Guo Chen (陈果) | Tao Zhu (朱涛)
Proceedings of the 21st Chinese National Conference on Computational Linguistics

Accurate postoperative risk prediction plays a positive role in clinical resource planning and contingency preparation, and in reducing patients' postoperative risk and mortality. Current postoperative risk prediction relies mainly on structured data such as preoperative and intraoperative basic patient information, laboratory tests, and vital signs, while the value of unstructured preoperative diagnoses, which contain rich semantic information, remains to be verified. To address this problem, this paper proposes an unstructured data representation enhanced model for postoperative risk prediction, which uses a self-attention mechanism to perform weighted information fusion of structured data with preoperative diagnosis data. In experiments on clinical data comparing our method with statistical machine learning models commonly used for postoperative risk prediction and with recent deep neural networks, our method not only improves postoperative risk prediction performance but also brings good interpretability to the prediction model.
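The core idea of weighted fusion via attention, combining a structured-feature representation with a text (preoperative diagnosis) representation according to relevance scores, can be sketched minimally. This is an assumed illustration, not the paper's model: the dot-product scoring, the two-source setup, and the function name `attention_fuse` are hypothetical.

```python
import math

# Toy sketch of attention-weighted fusion (assumed, not the paper's model):
# score each source representation against a query, softmax the scores, and
# return the weighted sum as the fused representation. The weights are also
# what gives such a model a degree of interpretability.

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_fuse(query, sources):
    """Dot-product attention over source vectors; returns fused vector
    and the per-source weights."""
    scores = [sum(q * s for q, s in zip(query, src)) for src in sources]
    weights = softmax(scores)
    dim = len(sources[0])
    fused = [sum(w * src[i] for w, src in zip(weights, sources))
             for i in range(dim)]
    return fused, weights

structured = [1.0, 0.0]   # hypothetical encoding of vitals/labs
text_emb   = [0.0, 1.0]   # hypothetical encoding of the preoperative diagnosis
fused, weights = attention_fuse([1.0, 1.0], [structured, text_emb])
```

With a query equally similar to both sources, the weights split evenly; a query closer to the text embedding would up-weight the unstructured side, which is the "information-weighted fusion" the abstract describes.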

A Simple and Effective Method to Improve Zero-Shot Cross-Lingual Transfer Learning
Kunbo Ding | Weijie Liu | Yuejian Fang | Weiquan Mao | Zhe Zhao | Tao Zhu | Haoyan Liu | Rong Tian | Yiren Chen
Proceedings of the 29th International Conference on Computational Linguistics

Existing zero-shot cross-lingual transfer methods rely on parallel corpora or bilingual dictionaries, which are expensive and impractical for low-resource languages. To remove these dependencies, researchers have explored training multilingual models on English-only resources and transferring them to low-resource languages. However, the effectiveness of this approach is limited by the gap between the embedding clusters of different languages. To address this issue, we propose Embedding-Push, Attention-Pull, and Robust targets to transfer English embeddings into virtual multilingual embeddings without semantic loss, thereby improving cross-lingual transferability. Experimental results on mBERT and XLM-R demonstrate that our method significantly outperforms previous work on zero-shot cross-lingual text classification and obtains better multilingual alignment.
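The "gap between embedding clusters" that limits English-only transfer can be illustrated with the simplest possible alignment: shifting one language's cluster so its centroid matches the other's. This is an assumed toy illustration only; it is not the paper's Embedding-Push/Attention-Pull objectives, and the function names are hypothetical.

```python
# Toy sketch of closing a cross-lingual embedding gap (assumed; not the
# paper's method): translate one cluster so its mean coincides with the
# other cluster's mean. Real methods learn far richer alignments, but the
# centroid gap is the simplest measure of cluster mismatch.

def centroid(vectors):
    dim = len(vectors[0])
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(dim)]

def align_by_centroid(src_vecs, tgt_vecs):
    """Shift source embeddings so their centroid matches the target centroid."""
    cs, ct = centroid(src_vecs), centroid(tgt_vecs)
    shift = [t - s for s, t in zip(cs, ct)]
    return [[x + d for x, d in zip(v, shift)] for v in src_vecs]

en = [[1.0, 1.0], [3.0, 1.0]]   # hypothetical English sentence embeddings
xx = [[0.0, 0.0], [2.0, 0.0]]   # hypothetical target-language embeddings
aligned = align_by_centroid(en, xx)
```

After the shift the two clusters share a centroid, so a classifier trained on the English side sits closer to the target-language inputs; the paper's targets pursue this kind of alignment without the semantic loss a crude global shift can cause.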