Junsheng Kong


2022

Mitigating Contradictions in Dialogue Based on Contrastive Learning
Weizhao Li | Junsheng Kong | Ben Liao | Yi Cai
Findings of the Association for Computational Linguistics: ACL 2022

Chatbot models have achieved remarkable progress in recent years but tend to yield contradictory responses. In this paper, we exploit the contrastive learning technique to mitigate this issue. To endow the model with the ability to discriminate contradictory patterns, we minimize the similarity between the target response and a contradiction-related negative example. The negative example is generated with learnable latent noise, which receives contradiction-related feedback from a pretrained critic. Experimental results show that our method helps avoid contradictions in response generation while preserving response fluency, outperforming existing methods on both automatic and human evaluation.
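The abstract does not spell out the implementation; below is a minimal PyTorch sketch of one plausible reading: an InfoNCE-style contrastive loss that pushes the target response away from a contradiction-related negative, where the negative is built by nudging learnable latent noise along the gradient of a pretrained critic's contradiction score. All names here (contrastive_contradiction_loss, make_negative, the toy linear critic) are illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

def contrastive_contradiction_loss(target, positive, negative, tau=0.1):
    # InfoNCE-style objective: pull the target response representation
    # toward the gold response while pushing it away from the
    # contradiction-related negative example.
    pos = F.cosine_similarity(target, positive, dim=-1) / tau
    neg = F.cosine_similarity(target, negative, dim=-1) / tau
    logits = torch.stack([pos, neg], dim=-1)                # (batch, 2)
    labels = torch.zeros(target.size(0), dtype=torch.long)  # positive at index 0
    return F.cross_entropy(logits, labels)

def make_negative(response, critic, steps=5, lr=0.5):
    # Assumed reading of "learnable latent noise with critic feedback":
    # perturb the response representation with noise updated by gradient
    # ascent on the critic's contradiction score.
    noise = torch.zeros_like(response, requires_grad=True)
    for _ in range(steps):
        score = critic(response + noise).sum()  # higher = more contradictory
        (grad,) = torch.autograd.grad(score, noise)
        with torch.no_grad():
            noise += lr * grad
    return (response + noise).detach()

# Toy demo with random representations and a linear stand-in critic.
d = 16
critic = torch.nn.Sequential(torch.nn.Linear(d, 1), torch.nn.Sigmoid())
target = torch.randn(4, d)
positive = torch.randn(4, d)
negative = make_negative(target, critic)
print(contrastive_contradiction_loss(target, positive, negative).item())
```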

2020

TSDG: Content-aware Neural Response Generation with Two-stage Decoding Process
Junsheng Kong | Zhicheng Zhong | Yi Cai | Xin Wu | Da Ren
Findings of the Association for Computational Linguistics: EMNLP 2020

Neural response generation models have achieved remarkable progress in recent years but tend to yield irrelevant and uninformative responses. One reason is that encoder-decoder based models always use a single decoder to generate a complete response in one pass, which favors high-frequency function words carrying little semantic information over low-frequency content words carrying more. To address this issue, we propose a content-aware model with a two-stage decoding process, named Two-stage Dialogue Generation (TSDG). We separate the decoding of content words from that of function words so that content words can be generated independently, without interference from function words. Experimental results on two datasets indicate that our model significantly outperforms several competitive generative models in terms of both automatic and human evaluation.
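A minimal sketch of how such a two-stage pipeline could be wired, assuming a first decoder that drafts content words and a second that expands the draft into a full response; the two lambda decoders below are toy stand-ins for illustration, not the paper's architecture.

```python
def two_stage_generate(context, content_decoder, response_decoder):
    # Stage 1: draft the semantically loaded content words on their own,
    # free of interference from high-frequency function words.
    draft = content_decoder(context)  # e.g. ["watch", "movie", "tonight"]
    # Stage 2: expand the draft into a complete response, conditioned on
    # both the dialogue context and the content-word draft.
    return response_decoder(context, draft)

# Toy stand-ins: stage 2 weaves function words around the content words.
content_decoder = lambda ctx: ["watch", "movie", "tonight"]
response_decoder = lambda ctx, draft: f"I will {draft[0]} a {draft[1]} {draft[2]}."
print(two_stage_generate("Any plans for the evening?",
                         content_decoder, response_decoder))
# -> I will watch a movie tonight.
```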