Xu Guo


2022

pdf
Improving the Sample Efficiency of Prompt Tuning with Domain Adaptation
Xu Guo | Boyang Li | Han Yu
Findings of the Association for Computational Linguistics: EMNLP 2022

Prompt tuning, or the conditioning of a frozen pretrained language model (PLM) with soft prompts learned from data, has demonstrated impressive performance on a wide range of NLP tasks. However, prompt tuning requires a large training dataset to be effective and is outperformed by finetuning the entire PLM in data-scarce regimes. Previous work (Gu et al., 2022; Vu et al., 2022) proposed to transfer soft prompts pretrained on the source domain to the target domain. In this paper, we explore domain adaptation for prompt tuning, a problem setting where unlabeled data from the target domain are available during pretraining. We propose bOosting Prompt TunIng with doMain Adaptation (OPTIMA), which regularizes the decision boundary to be smooth around regions where source and target data distributions are similar. Extensive experiments demonstrate that OPTIMA significantly enhances the transferability and sample efficiency of prompt tuning compared to strong baselines. Moreover, in few-shot settings, OPTIMA exceeds full-model tuning by a large margin.
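The abstract describes OPTIMA only at a high level. Below is a minimal, hypothetical PyTorch sketch of the general idea it names: soft-prompt tuning against a frozen model, with a smoothness (consistency-under-perturbation) regularizer computed on unlabeled target-domain inputs. All names (`frozen_plm`, `soft_prompt`) and the use of random rather than adversarial perturbations are simplifying assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch (not the authors' code): soft-prompt tuning with a
# smoothness regularizer on unlabeled target-domain inputs, assuming a frozen
# model and a learnable prompt fused into the input embedding.
import torch
import torch.nn.functional as F
from torch import nn

torch.manual_seed(0)
d_model, prompt_len, n_classes = 32, 4, 2

# Stand-in for a frozen pretrained LM: its parameters are never updated.
frozen_plm = nn.Sequential(nn.Linear(d_model, d_model), nn.Tanh(),
                           nn.Linear(d_model, n_classes))
for p in frozen_plm.parameters():
    p.requires_grad_(False)

# The only trainable parameters: the soft prompt vectors.
soft_prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)
optimizer = torch.optim.Adam([soft_prompt], lr=1e-2)

def forward(x):
    # Fuse the (mean-pooled) prompt with the input embedding, then apply the frozen model.
    return frozen_plm(x + soft_prompt.mean(dim=0))

# Toy batches: labeled source-domain data and unlabeled target-domain data.
x_src = torch.randn(16, d_model)
y_src = torch.randint(0, n_classes, (16,))
x_tgt = torch.randn(16, d_model)          # no labels available

for step in range(100):
    optimizer.zero_grad()
    # 1) Supervised loss on the source domain.
    task_loss = F.cross_entropy(forward(x_src), y_src)
    # 2) Smoothness term: predictions on target inputs should not change
    #    under a small perturbation of the input embedding.
    noise = 0.1 * torch.randn_like(x_tgt)
    p_clean = F.log_softmax(forward(x_tgt), dim=-1).detach()
    p_noisy = F.log_softmax(forward(x_tgt + noise), dim=-1)
    smooth_loss = F.kl_div(p_noisy, p_clean.exp(), reduction="batchmean")
    (task_loss + smooth_loss).backward()
    optimizer.step()
```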

pdf
基于多源知识融合的领域情感词典表示学习研究(Domain Sentiment Lexicon Representation Learning Based on Multi-source Knowledge Fusion)
Ruihua Qi (祁瑞华) | Jia Wei (魏佳) | Zhen Shao (邵震) | Xu Guo (郭旭) | Heng Chen (陈恒)
Proceedings of the 21st Chinese National Conference on Computational Linguistics

This paper aims to address the relative scarcity of annotated data and the insufficient representation of sentiment semantics in the task of domain sentiment lexicon construction. It computes joint weights from domain differences across multiple data sources, fuses prior sentiment knowledge with FastText word-vector representation learning, and maps sentiment semantic knowledge into a new word-vector space, so that domain sentiment lexicons suited to big-data, multi-domain, and multilingual settings can be constructed automatically from unlabeled data. Comparative experiments on Chinese and English multi-domain public datasets show that, compared with sentiment-lexicon methods and pretrained word-vector methods, the proposed multi-source knowledge fusion approach to domain sentiment lexicon representation learning clearly improves classification accuracy on the experimental datasets and is robust across multiple algorithms, languages, domains, and datasets. Ablation experiments further verify the contribution of each module of the proposed model to improving sentiment classification performance.
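The abstract names the ingredients (prior sentiment knowledge, FastText embeddings, domain-difference weights) without implementation detail. The following is an illustrative sketch only, assuming gensim's FastText, a toy seed lexicon, and hypothetical per-domain fusion weights; it is not the paper's method, merely one simple way to propagate seed polarity to unlabeled words via embedding similarity and fuse the results across domains.

```python
# Illustrative sketch (not the paper's implementation): fuse a prior sentiment
# lexicon with FastText word vectors and propagate polarity to unlabeled words
# via embedding similarity, with a weighted fusion across domains.
import numpy as np
from gensim.models import FastText

# Toy multi-domain corpora (in practice: large unlabeled in-domain text).
corpora = {
    "reviews": [["great", "battery", "awful", "screen"], ["love", "camera"]],
    "news":    [["market", "great", "loss"], ["awful", "crash", "gain"]],
}
# Prior sentiment knowledge: seed words with polarity scores in [-1, 1].
prior_lexicon = {"great": 1.0, "love": 1.0, "awful": -1.0, "loss": -0.5}

# Hypothetical per-domain fusion weights, e.g. reflecting how close each
# source domain is to the target domain (assumed values here).
domain_weights = {"reviews": 0.7, "news": 0.3}

def domain_scores(sentences):
    """Train FastText on one domain and score every word by its similarity
    to the seed words, signed by the seeds' polarity."""
    model = FastText(sentences=sentences, vector_size=50, window=3,
                     min_count=1, epochs=50, seed=1)
    scores = {}
    for word in model.wv.index_to_key:
        sims = [model.wv.similarity(word, seed) * pol
                for seed, pol in prior_lexicon.items() if seed in model.wv]
        scores[word] = float(np.mean(sims)) if sims else 0.0
    return scores

# Weighted fusion across domains -> one domain-adapted sentiment lexicon.
fused = {}
for domain, sents in corpora.items():
    for word, s in domain_scores(sents).items():
        fused[word] = fused.get(word, 0.0) + domain_weights[domain] * s

print(sorted(fused.items(), key=lambda kv: kv[1], reverse=True)[:5])
```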

2021

pdf
Latent-Optimized Adversarial Neural Transfer for Sarcasm Detection
Xu Guo | Boyang Li | Han Yu | Chunyan Miao
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

The existence of multiple datasets for sarcasm detection prompts us to apply transfer learning to exploit their commonality. The adversarial neural transfer (ANT) framework utilizes multiple loss terms that encourage the source-domain and the target-domain feature distributions to be similar while optimizing for domain-specific performance. However, these objectives may be in conflict, which can lead to optimization difficulties and sometimes diminished transfer. We propose a generalized latent optimization strategy that allows different losses to accommodate each other and improves training dynamics. The proposed method outperforms transfer learning and meta-learning baselines. In particular, we achieve a 10.02% absolute performance gain over the previous state of the art on the iSarcasm dataset.
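As a rough illustration of the general idea the abstract names, the sketch below shows adversarial neural transfer in which the shared latent features are first refined by a small gradient step on the task loss ("latent optimization") before the competing, domain-adversarial loss is applied. This is a hedged, simplified sketch under assumed module names and a standard gradient-reversal trick, not the authors' implementation.

```python
# Hedged sketch (not the authors' code): adversarial neural transfer with a
# latent "lookahead" step before the conflicting losses are computed.
import torch
import torch.nn.functional as F
from torch import nn

torch.manual_seed(0)
d_in, d_lat = 64, 32

encoder = nn.Linear(d_in, d_lat)      # shared feature extractor
task_head = nn.Linear(d_lat, 2)       # sarcastic vs. non-sarcastic
domain_head = nn.Linear(d_lat, 2)     # source-domain vs. target-domain
optimizer = torch.optim.Adam(
    [*encoder.parameters(), *task_head.parameters(), *domain_head.parameters()],
    lr=1e-3)

def grad_reverse(x):
    # Gradient-reversal trick: identity in the forward pass, negated gradient
    # in the backward pass, so the encoder is pushed toward domain-confusing
    # features while the domain head still learns to discriminate.
    return x.detach() * 2 - x

# Toy labeled batches from a source and a target sarcasm dataset.
x_src, y_src = torch.randn(8, d_in), torch.randint(0, 2, (8,))
x_tgt, y_tgt = torch.randn(8, d_in), torch.randint(0, 2, (8,))
x_all, y_all = torch.cat([x_src, x_tgt]), torch.cat([y_src, y_tgt])
d_all = torch.cat([torch.zeros(8), torch.ones(8)]).long()

eta = 0.1  # step size of the inner latent update
for step in range(200):
    optimizer.zero_grad()
    z = encoder(x_all)

    # Inner step: nudge the latent codes toward lower task loss, so the
    # subsequent (potentially conflicting) losses are computed on features
    # that already accommodate the task objective.
    inner_loss = F.cross_entropy(task_head(z), y_all)
    grad_z = torch.autograd.grad(inner_loss, z, create_graph=True)[0]
    z_adapted = z - eta * grad_z

    # Outer objectives on the adapted latents: task loss plus a
    # domain-adversarial loss routed through gradient reversal.
    task_loss = F.cross_entropy(task_head(z_adapted), y_all)
    domain_loss = F.cross_entropy(domain_head(grad_reverse(z_adapted)), d_all)
    (task_loss + domain_loss).backward()
    optimizer.step()
```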