Ningning Wang
2022
ClusterFormer: Neural Clustering Attention for Efficient and Effective Transformer
Ningning Wang | Guobing Gan | Peng Zhang | Shuai Zhang | Junqiu Wei | Qun Liu | Xin Jiang
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Recently, much research has been carried out to improve the efficiency of the Transformer. Among these efforts, sparse pattern-based methods form an important branch of efficient Transformers. However, some existing sparse methods use fixed patterns to select words, without considering similarities between words. Other sparse methods use clustering patterns to select words, but the clustering process is separate from the training process of the target task, which causes a decrease in effectiveness. To address these limitations, we design a neural clustering method that can be seamlessly integrated into the self-attention mechanism in the Transformer. The clustering task and the target task are jointly trained and optimized to benefit each other, leading to significant improvements in effectiveness. In addition, our method groups words with strong dependencies into the same cluster and performs the attention mechanism for each cluster independently, which improves efficiency. We verified our method on machine translation, text classification, natural language inference, and text matching tasks. Experimental results show that our method outperforms two typical sparse attention methods, Reformer and Routing Transformer, while having comparable or even better time and memory efficiency.
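To make the per-cluster attention idea concrete, here is a minimal PyTorch sketch of cluster-restricted self-attention: each token is assigned to the nearest of a set of learned centroids, and attention is computed only among tokens in the same cluster. The hard nearest-centroid assignment and the names (`centroids`, `cluster_attention`) are illustrative assumptions, not the paper's exact neural clustering method.

```python
import torch
import torch.nn.functional as F

def cluster_attention(q, k, v, centroids):
    """q, k, v: (seq_len, d_model); centroids: (num_clusters, d_model).

    Illustrative sketch only: hard assignment to learned centroids,
    followed by standard scaled dot-product attention per cluster.
    """
    # Assign each token to its nearest centroid (hard assignment).
    assign = torch.cdist(q, centroids).argmin(dim=-1)  # (seq_len,)
    out = torch.zeros_like(v)
    scale = q.size(-1) ** 0.5
    for c in range(centroids.size(0)):
        idx = (assign == c).nonzero(as_tuple=True)[0]
        if idx.numel() == 0:
            continue  # empty cluster
        # Attention is restricted to tokens inside the same cluster,
        # so cost scales with cluster size rather than full sequence length.
        scores = q[idx] @ k[idx].T / scale
        out[idx] = F.softmax(scores, dim=-1) @ v[idx]
    return out
```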
2021
基于多层次预训练策略和多任务学习的端到端蒙汉语音翻译(End-to-end Mongolian-Chinese Speech Translation Based on Multi-level Pre-training Strategies and Multi-task Learning)
Ningning Wang (王宁宁) | Long Fei (飞龙) | Hui Zhang (张晖)
Proceedings of the 20th Chinese National Conference on Computational Linguistics
End-to-end speech translation translates source-language speech directly into target-language text. It requires "source-language speech - target-language text" pairs as training data, yet such data are extremely scarce. This paper proposes a training method that combines a multi-level pre-training strategy with multi-task learning. First, the individual modules of the speech recognition and machine translation models are pre-trained at multiple levels. The speech recognition and machine translation models are then connected to form a speech translation model, and transfer learning is used to fine-tune the pre-trained model in multiple steps. During this process, multi-task learning is applied, with speech recognition organized as an auxiliary task for speech translation, making full use of existing data in various forms to train the end-to-end model. This work applies end-to-end techniques to Mongolian-Chinese speech translation under resource-constrained conditions for the first time, and builds the first practically usable end-to-end Mongolian-Chinese speech translation system with relatively high translation quality.
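As a rough illustration of the multi-task setup described above, the sketch below shares one speech encoder between a translation decoder and an auxiliary recognition decoder and sums their losses. The module interfaces and the weight `lambda_asr` are assumptions for illustration, not details taken from the paper.

```python
import torch.nn as nn

class MultiTaskST(nn.Module):
    """Speech translation with ASR as an auxiliary task (illustrative sketch)."""

    def __init__(self, encoder, st_decoder, asr_decoder, lambda_asr=0.3):
        super().__init__()
        self.encoder = encoder          # shared speech encoder (pre-trained)
        self.st_decoder = st_decoder    # main task: target-language text
        self.asr_decoder = asr_decoder  # auxiliary task: source-language transcript
        self.lambda_asr = lambda_asr    # assumed auxiliary-loss weight

    def forward(self, speech, translation, transcript):
        h = self.encoder(speech)
        loss_st = self.st_decoder(h, translation)
        loss_asr = self.asr_decoder(h, transcript)
        # Joint objective: translation loss plus weighted recognition loss.
        return loss_st + self.lambda_asr * loss_asr
```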