Yao Zhang
2021
Semi-supervised Intent Discovery with Contrastive Learning
Xiang Shen | Yinge Sun | Yao Zhang | Mani Najmabadi
Proceedings of the 3rd Workshop on Natural Language Processing for Conversational AI
User intent discovery is a key step in developing a Natural Language Understanding (NLU) module at the core of any modern Conversational AI system. Typically, human experts review a representative sample of user input data to discover new intents, which is subjective, costly, and error-prone. In this work, we aim to assist NLU developers by presenting a novel method for discovering new intents at scale given a corpus of utterances. Our method uses supervised contrastive learning to leverage information from a domain-relevant, already labeled dataset and identifies new intents in the corpus at hand with unsupervised K-means clustering. It outperforms the state of the art by a large margin, up to 2% and 13% in clustering accuracy on two benchmark datasets. Furthermore, we apply the method to a large dataset from the travel domain to demonstrate its effectiveness on a real-world use case.
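As a rough illustration of the pipeline this abstract describes (not the paper's released code), the sketch below pairs a standard supervised contrastive loss with scikit-learn's K-means. The temperature, batch construction, and the assumption that utterances have already been embedded by some encoder are all illustrative choices, not details taken from the paper.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans


def supervised_contrastive_loss(embeddings: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """Supervised contrastive loss over a batch: utterances sharing an
    intent label are pulled together, all other pairs are pushed apart."""
    z = F.normalize(embeddings, dim=1)                 # (N, d) unit vectors
    sim = z @ z.T / temperature                        # (N, N) similarities
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))          # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    positives = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    per_anchor = -(log_prob.masked_fill(~positives, 0.0).sum(dim=1)
                   / positives.sum(dim=1).clamp(min=1))
    return per_anchor.mean()


def cluster_new_intents(corpus_embeddings: np.ndarray,
                        num_intents: int) -> np.ndarray:
    """After fine-tuning an encoder with the loss above on the labeled,
    domain-relevant data, embed the unlabeled corpus and run K-means;
    each resulting cluster is a candidate new intent."""
    return KMeans(n_clusters=num_intents, n_init=10).fit_predict(corpus_embeddings)
```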
GMH: A General Multi-hop Reasoning Model for KG Completion
Yao Zhang | Hongru Liang | Adam Jatowt | Wenqiang Lei | Xin Wei | Ning Jiang | Zhenglu Yang
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Knowledge graphs are essential for numerous downstream natural language processing applications, but are typically incomplete, with many facts missing. This has motivated research on the multi-hop reasoning task, which can be formulated as a search process; current models typically perform short-distance reasoning. However, long-distance reasoning is also vital, as it can connect superficially unrelated entities. To the best of our knowledge, no general framework exists that handles multi-hop reasoning in mixed long- and short-distance scenarios. We argue that a general multi-hop reasoning model must resolve two key issues: i) where to go, and ii) when to stop. Therefore, we propose a general model that addresses these issues with three modules: 1) a local-global knowledge module to estimate the possible paths, 2) a differentiated action dropout module to explore a diverse set of paths, and 3) an adaptive stopping search module to avoid over-searching. Comprehensive results on three datasets demonstrate the superiority of our model, with significant improvements over baselines in both short- and long-distance reasoning scenarios.
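The abstract frames multi-hop reasoning as a search over paths with two decisions: where to go and when to stop. The skeleton below is a hedged sketch of that framing only, not the GMH architecture: `path_score` and `stop_score` are hypothetical placeholders standing in for the paper's local-global knowledge and adaptive stopping modules, and differentiated action dropout is omitted.

```python
from typing import Callable, Dict, List, Tuple

Triple = Tuple[str, str, str]  # (head entity, relation, tail entity)
Edge = Tuple[str, str]         # (relation, neighbour entity)


def multi_hop_search(graph: Dict[str, List[Edge]],
                     source: str,
                     path_score: Callable[[List[Triple], str, str], float],
                     stop_score: Callable[[List[Triple]], float],
                     beam_width: int = 8,
                     max_hops: int = 6) -> List[Tuple[List[Triple], str]]:
    """Beam search over paths starting at `source`. At every hop, each partial
    path either extends along an outgoing edge ("where to go") or terminates
    once stopping scores at least as high as every extension ("when to stop")."""
    beams: List[Tuple[float, List[Triple], str]] = [(0.0, [], source)]
    finished: List[Tuple[List[Triple], str]] = []
    for _ in range(max_hops):
        candidates: List[Tuple[float, List[Triple], str]] = []
        for total, path, node in beams:
            extensions = [(total + path_score(path, rel, nxt),
                           path + [(node, rel, nxt)], nxt)
                          for rel, nxt in graph.get(node, [])]
            if not extensions or total + stop_score(path) >= max(s for s, _, _ in extensions):
                finished.append((path, node))          # adaptive stop
            else:
                candidates.extend(extensions)
        if not candidates:
            beams = []
            break
        beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_width]
    # Paths that hit the hop limit are returned alongside the stopped ones.
    finished.extend((path, node) for _, path, node in beams)
    return finished
```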