Sheng Guo
2022
RGL: A Simple yet Effective Relation Graph Augmented Prompt-based Tuning Approach for Few-Shot Learning
Yaqing Wang | Xin Tian | Haoyi Xiong | Yueyang Li | Zeyu Chen | Sheng Guo | Dejing Dou
Findings of the Association for Computational Linguistics: NAACL 2022
Pre-trained language models (PLMs) can provide a good starting point for downstream applications. However, it is difficult to generalize PLMs to new tasks given a few labeled samples. In this work, we show that Relation Graph augmented Learning (RGL) can improve the performance of few-shot natural language understanding tasks. During learning, RGL constructs a relation graph based on the label consistency between samples in the same batch, and learns to solve the resultant node classification and link prediction problems on the relation graph. In this way, RGL fully exploits the limited supervised information, which can boost the tuning effectiveness. Extensive experimental results show that RGL consistently improves the performance of prompt-based tuning strategies.
2020
Exploring Contextual Word-level Style Relevance for Unsupervised Style Transfer
Chulun Zhou | Liangyu Chen | Jiachen Liu | Xinyan Xiao | Jinsong Su | Sheng Guo | Hua Wu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Unsupervised style transfer aims to change the style of an input sentence while preserving its original content, without using parallel training data. Current dominant approaches lack fine-grained control over the influence of the target style and are therefore unable to yield desirable output sentences. In this paper, we propose a novel attentional sequence-to-sequence (Seq2seq) model that dynamically exploits the relevance of each output word to the target style for unsupervised style transfer. Specifically, we first pretrain a style classifier, where the relevance of each input word to the original style can be quantified via layer-wise relevance propagation. In a denoising auto-encoding manner, we then train an attentional Seq2seq model to simultaneously reconstruct input sentences and re-predict the previously quantified word-level style relevance. In this way, the model learns to automatically predict the style relevance of each output word. We then equip the decoder of this model with a neural style component that exploits the predicted word-level style relevance for better style transfer. In particular, we fine-tune this model using a carefully designed objective function involving style transfer, style relevance consistency, content preservation, and fluency modeling loss terms. Experimental results show that our proposed model achieves state-of-the-art performance in terms of both transfer accuracy and content preservation.
2010
Finding the Storyteller: Automatic Spoiler Tagging using Linguistic Cues
Sheng Guo | Naren Ramakrishnan
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)