Shudong Liu


2024

Domain-Aware k-Nearest-Neighbor Knowledge Distillation for Machine Translation
Zhexuan Wang | Shudong Liu | Xuebo Liu | Miao Zhang | Derek Wong | Min Zhang
Findings of the Association for Computational Linguistics: ACL 2024

kNN-MT utilizes neighborhood knowledge for auxiliary decoding, significantly improving translation performance. Subsequently, kNN-KD moves the use of neighborhood knowledge from the decoding phase to the training phase, addressing the temporal and spatial inefficiencies inherent in kNN-MT. However, kNN-KD transfers all kNN knowledge indiscriminately, which can restrict the learning of student models. In this paper, we propose a novel domain-aware kNN-KD method, which filters the neighborhood knowledge and retains only domain-relevant neighbors for learning during distillation. Notably, the entire process relies exclusively on the neighborhood knowledge of the original model, eliminating the need to build any additional datastores. Experiments on four domain translation tasks demonstrate that our method achieves state-of-the-art performance, with an average gain of 1.55 COMET and 1.42 BLEU points, by further improving the translation of rare words. Source code can be accessed at https://github.com/wangzx1219/Dk-KD.
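To make the distillation idea concrete, here is a minimal sketch, not the paper's implementation, of how a kNN-based soft target could be formed from a model's own cached decoder states and then used as a distillation signal; the function name, the distance temperature, and the datastore layout are all illustrative assumptions.

```python
# A minimal sketch (not the authors' released code) of building a kNN-based
# soft target from the model's own cached decoder states. All names and the
# datastore layout (keys/values tensors) are illustrative assumptions.
import torch
import torch.nn.functional as F


def knn_soft_target(query, keys, values, vocab_size, k=8, temperature=10.0):
    """query:  (d,)    decoder hidden state at the current training step
    keys:   (N, d)  cached decoder hidden states of the original model
    values: (N,)    gold target-token ids paired with each key"""
    dists = torch.cdist(query.unsqueeze(0), keys).squeeze(0)   # (N,) L2 distances
    knn_dists, knn_idx = dists.topk(k, largest=False)          # k closest entries
    weights = F.softmax(-knn_dists / temperature, dim=0)       # closer => larger weight
    target = torch.zeros(vocab_size)
    target.scatter_add_(0, values[knn_idx], weights)           # aggregate weights by token id
    return target                                              # soft distribution over the vocabulary


# The student can then be trained with a KL term toward this soft target,
# e.g. loss = F.kl_div(student_log_probs, target, reduction="sum");
# a domain-aware variant would additionally drop neighbors judged
# irrelevant to the target domain before forming the distribution.
```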

2023

Can LMs Generalize to Future Data? An Empirical Analysis on Text Summarization
Chi Cheang | Hou Chan | Derek Wong | Xuebo Liu | Zhaocong Li | Yanming Sun | Shudong Liu | Lidia Chao
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Recent pre-trained language models (PLMs) achieve promising results on existing abstractive summarization datasets. However, existing summarization benchmarks overlap in time with the standard pre-training corpora and fine-tuning datasets. Hence, the strong performance of PLMs may rely on the parametric knowledge memorized during pre-training and fine-tuning. Moreover, the knowledge memorized by PLMs may quickly become outdated, which affects their generalization performance on future data. In this work, we propose TempoSum, a novel benchmark containing data samples from 2010 to 2022, to understand the temporal generalization ability of abstractive summarization models. Through extensive human evaluation, we show that the parametric knowledge stored in summarization models significantly affects the faithfulness of the generated summaries on future data. Moreover, existing faithfulness enhancement methods cannot reliably improve the faithfulness of summarization models on future data. Finally, we discuss several recommendations to the research community on how to evaluate and improve the temporal generalization capability of text summarization models.

kNN-TL: k-Nearest-Neighbor Transfer Learning for Low-Resource Neural Machine Translation
Shudong Liu | Xuebo Liu | Derek F. Wong | Zhaocong Li | Wenxiang Jiao | Lidia S. Chao | Min Zhang
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Transfer learning has been shown to be an effective technique for enhancing the performance of low-resource neural machine translation (NMT). This is typically achieved either by fine-tuning a child model with a pre-trained parent model, or by utilizing the output of the parent model during the training of the child model. However, these methods do not make use of the parent knowledge during child inference, which may limit translation performance. In this paper, we propose a k-Nearest-Neighbor Transfer Learning (kNN-TL) approach for low-resource NMT, which leverages the parent knowledge throughout the entire development process of the child model. Our approach includes a parent-child representation alignment method, which ensures consistency in the output representations between the two models, and a child-aware datastore construction method that improves inference efficiency by selectively distilling the parent datastore based on relevance to the child model. Experimental results on four low-resource translation tasks show that kNN-TL outperforms strong baselines. Extensive analyses further demonstrate the effectiveness of our approach. Code and scripts are freely available at https://github.com/NLP2CT/kNN-TL.
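As a rough illustration of the representation-alignment idea, the sketch below (hypothetical names; the projection layer and squared-error loss are assumptions, not necessarily the paper's exact objective) pulls the child decoder states toward the frozen parent's states so that the parent's knowledge remains usable when decoding with the child.

```python
# A minimal sketch (assumptions, not the released kNN-TL code) of a
# parent-child representation alignment objective: child decoder states are
# pulled toward the frozen parent's states over valid target positions.
import torch
import torch.nn as nn


class RepresentationAlignment(nn.Module):
    def __init__(self, child_dim, parent_dim):
        super().__init__()
        self.proj = nn.Linear(child_dim, parent_dim)   # map child states into the parent space

    def forward(self, child_states, parent_states, mask):
        """child_states: (B, T, child_dim), parent_states: (B, T, parent_dim),
        mask: (B, T) float, 1.0 on real (non-padding) target positions."""
        aligned = self.proj(child_states)
        sq_err = ((aligned - parent_states.detach()) ** 2).sum(-1)   # parent is not updated
        return (sq_err * mask).sum() / mask.sum().clamp(min=1.0)


# This term would typically be added to the usual NMT cross-entropy,
# e.g. loss = ce_loss + alpha * align_loss, with alpha controlling how
# strongly the child is tied to the parent's representation space.
```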

2021

面向中文口语理解的基于依赖引导的字特征槽填充模型(A Dependency-Guided Character-Based Slot Filling Model for Chinese Spoken Language Understanding)
Zhanbiao Zhu (朱展标) | Peijie Huang (黄沛杰) | Yexing Zhang (张业兴) | Shudong Liu (刘树东) | Hualin Zhang (张华林) | Junyao Huang (黄均曜)
Proceedings of the 20th Chinese National Conference on Computational Linguistics

Joint models of intent detection and slot filling have raised spoken language understanding (SLU) to a new level. However, their sequence labeling performance is limited by rare or unseen slot mentions (zero-shot slot mentions), and these joint models often fail to exploit the syntactic knowledge present in the input sequence. Prior work has shown that sequence labeling can benefit from introducing dependency tree structures to help infer the presence of slots. In Chinese spoken language understanding, an utterance is a sequence of characters and the characters of the input correspond one-to-one with the slot labels, so slot filling models are typically character-based; word-level dependency tree structures therefore cannot be applied to them directly. To resolve this mismatch between characters and words, this paper proposes a dependency-guided character-based slot filling model (DCSF), which offers a concise way to introduce word-level dependency tree structures into Chinese character-based models while preserving word-level contextual and segmentation information by modeling the relations among characters within words. Experimental results on the public benchmark corpora SMP-ECDT and CrossWOZ show that our model outperforms the compared models, with especially large improvements for unseen slot mentions and in low-resource settings.
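To illustrate one way the character-word mismatch could be bridged (a sketch under my own assumptions, not the DCSF architecture itself), the snippet below projects a word-level dependency tree onto character positions: characters within a word are fully connected, and characters of dependency-linked words are connected to each other; the resulting adjacency matrix could then drive a graph encoder over character representations.

```python
# A minimal sketch (illustrative only) of projecting a word-level dependency
# tree onto character positions for a character-based slot filling model.
import torch


def char_adjacency(word_spans, heads):
    """word_spans: list of (start, end) character offsets per word (end exclusive);
    heads: dependency head word index for each word (-1 for the root)."""
    n_chars = max(end for _, end in word_spans)
    adj = torch.zeros(n_chars, n_chars)
    for i, (start, end) in enumerate(word_spans):
        adj[start:end, start:end] = 1.0              # intra-word connections
        head = heads[i]
        if head >= 0:
            h_start, h_end = word_spans[head]
            adj[start:end, h_start:h_end] = 1.0      # dependent word -> head word
            adj[h_start:h_end, start:end] = 1.0      # symmetric edge
    return adj


# Example: a 7-character utterance segmented into 4 words whose heads are
# given by a word-level dependency parse.
adj = char_adjacency([(0, 2), (2, 5), (5, 6), (6, 7)], heads=[-1, 3, 1, 0])
```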

结合边界预测和动态模板方法的槽填充模型(Slot Filling Model with Boundary Prediction and Dynamic Template)
Zhanbiao Zhu (朱展标) | Peijie Huang (黄沛杰) | Yexing Zhang (张业兴) | Shudong Liu (刘树东) | Hualin Zhang (张华林) | Junyao Huang (黄均曜)
Proceedings of the 20th Chinese National Conference on Computational Linguistics

Joint models of intent detection and slot filling have raised spoken language understanding (SLU) to a new level. However, current models infer positional information from utterance context alone and do not consider the positional relations between slot labels, so they are prone to boundary errors during slot extraction, which degrades the final slot extraction performance. Moreover, in slot extraction, slot mentions may be indistinguishable from ordinary utterance text, particularly movie titles, song titles, and the like, so models are easily confused by slot-mention text and fail to identify slot boundaries correctly. This paper proposes a slot filling model with boundary prediction and dynamic templates (Boundary-prediction and Dynamic-template Slot Filling, BDSF) for spoken language understanding. The model introduces an auxiliary task that jointly predicts boundary information, bringing positional information into slot filling, and uses a dynamic template mechanism to model the sentence patterns of the utterance, allowing the model to focus on the non-slot-mention parts of the utterance; this prevents the model from being distracted by slot mentions and strengthens its ability to distinguish slot boundaries. Experimental results on the public benchmark corpora CAIS and SMP-ECDT show that our model outperforms the compared models and, in particular, provides accurate positional information for the slot label prediction model.
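As a rough sketch of the boundary-prediction auxiliary task (hypothetical class and parameter names; the dynamic-template mechanism is omitted), the model below adds a second classification head that predicts begin/inside/outside boundary tags alongside the slot tags and sums the two losses.

```python
# A minimal sketch (not the paper's BDSF implementation) of jointly training
# a slot-tagging head with a boundary-prediction auxiliary head.
import torch
import torch.nn as nn


class SlotFillerWithBoundary(nn.Module):
    def __init__(self, hidden_dim, n_slot_labels, n_boundary_labels=3):
        super().__init__()
        self.slot_head = nn.Linear(hidden_dim, n_slot_labels)            # slot tags
        self.boundary_head = nn.Linear(hidden_dim, n_boundary_labels)    # B / I / O boundaries
        self.ce = nn.CrossEntropyLoss(ignore_index=-100)

    def forward(self, char_states, slot_labels, boundary_labels, beta=0.5):
        """char_states: (B, T, H) encoder outputs per character;
        slot_labels / boundary_labels: (B, T) gold tags, -100 on padding."""
        slot_logits = self.slot_head(char_states)                        # (B, T, n_slot_labels)
        boundary_logits = self.boundary_head(char_states)                # (B, T, 3)
        slot_loss = self.ce(slot_logits.transpose(1, 2), slot_labels)
        boundary_loss = self.ce(boundary_logits.transpose(1, 2), boundary_labels)
        return slot_loss + beta * boundary_loss                          # joint objective
```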