Jinzheng Zhao
2023
Generative Zero-Shot Prompt Learning for Cross-Domain Slot Filling with Inverse Prompting
Xuefeng Li | Liwen Wang | Guanting Dong | Keqing He | Jinzheng Zhao | Hao Lei | Jiachi Liu | Weiran Xu
Findings of the Association for Computational Linguistics: ACL 2023
Zero-shot cross-domain slot filling aims to transfer knowledge from a labeled source domain to an unlabeled target domain. Existing models either encode slot descriptions and examples or design handcrafted question templates using heuristic rules, and thus suffer from poor generalization capability or robustness. In this paper, we propose a generative zero-shot prompt learning framework for cross-domain slot filling that improves both generalization and robustness over previous work. In addition, we introduce a novel inverse prompting strategy that distinguishes different slot types to avoid the multiple-prediction problem, and an efficient prompt tuning strategy that boosts performance while training only a small number of prompt parameters. Experiments and analysis demonstrate the effectiveness of our proposed framework, especially the large improvement (+13.44% F1) on unseen slots.
2022
PSSAT: A Perturbed Semantic Structure Awareness Transferring Method for Perturbation-Robust Slot Filling
Guanting Dong | Daichi Guo | Liwen Wang | Xuefeng Li | Zechen Wang | Chen Zeng | Keqing He | Jinzheng Zhao | Hao Lei | Xinyue Cui | Yi Huang | Junlan Feng | Weiran Xu
Proceedings of the 29th International Conference on Computational Linguistics
Most existing slot filling models tend to memorize inherent patterns of entities and the corresponding contexts from training data. However, these models can lead to system failures or undesirable outputs when exposed to spoken language perturbations or variations in practice. We propose a perturbed semantic structure awareness transferring method for training perturbation-robust slot filling models. Specifically, we introduce two MLM-based training strategies to learn contextual semantic structure and word distribution, respectively, from an unsupervised language perturbation corpus. We then transfer the semantic knowledge learned in the upstream training procedure into the original samples and filter the generated data through consistency processing. These procedures aim to enhance the robustness of slot filling models. Experimental results show that our method consistently outperforms previous basic methods and achieves strong generalization while preventing the model from memorizing inherent patterns of entities and contexts.
Co-authors
- Guanting Dong 2
- Liwen Wang 2
- Xuefeng Li 2
- Keqing He 2
- Hao Lei 2