Zefeng Cai
Also published as: ZeFeng Cai
2022
Multi-Scale Distribution Deep Variational Autoencoder for Explanation Generation
ZeFeng Cai | Linlin Wang | Gerard de Melo | Fei Sun | Liang He
Findings of the Association for Computational Linguistics: ACL 2022
Generating explanations for recommender systems is essential for improving their transparency, as users often wish to understand the reason behind a given recommendation. Previous methods mainly focus on improving generation quality, but often produce generic explanations that fail to incorporate user- and item-specific details. To resolve this problem, we present Multi-Scale Distribution Deep Variational Autoencoders (MVAE). These are deep hierarchical VAEs with a prior network that eliminates noise while retaining meaningful signals in the input, coupled with a recognition network serving as the source of information to guide the learning of the prior network. Further, the Multi-scale distribution Learning Framework (MLF) along with a Target Tracking Kullback-Leibler divergence (TKL) mechanism are proposed to employ multiple KL divergences at different scales for more effective learning. Extensive empirical experiments demonstrate that our methods can generate explanations with concrete, input-specific content.
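The abstract gives no implementation details, so the following is only an illustrative sketch of how a multi-scale KL objective for a hierarchical VAE might be assembled: one KL divergence per latent scale between the recognition (posterior) and prior distributions, each tracked toward a per-scale target. The target-tracking form, and all names such as `kl_gaussian` and `multi_scale_kl`, are assumptions for illustration, not the paper's released code.

```python
# Illustrative sketch (assumed, not the authors' implementation) of a
# multi-scale KL term for a hierarchical VAE, where each scale's KL is
# pushed toward a target value rather than minimized outright.
import torch

def kl_gaussian(mu_q, logvar_q, mu_p, logvar_p):
    """KL(q || p) between diagonal Gaussians, summed over latent dims."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.sum(dim=-1)

def multi_scale_kl(posteriors, priors, targets):
    """Combine per-scale KLs, each tracked toward a target value.

    posteriors / priors: lists of (mu, logvar) pairs, one per scale.
    targets: hypothetical per-scale target KL values; penalizing the
    deviation from a target keeps each scale carrying a controlled
    amount of information instead of collapsing to the prior.
    """
    total = 0.0
    for (mu_q, lv_q), (mu_p, lv_p), tgt in zip(posteriors, priors, targets):
        kl = kl_gaussian(mu_q, lv_q, mu_p, lv_p).mean()
        total = total + (kl - tgt).abs()
    return total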
STAR: SQL Guided Pre-Training for Context-dependent Text-to-SQL Parsing
Zefeng Cai | Xiangyu Li | Binyuan Hui | Min Yang | Bowen Li | Binhua Li | Zheng Cao | Weijie Li | Fei Huang | Luo Si | Yongbin Li
Findings of the Association for Computational Linguistics: EMNLP 2022
In this paper, we propose STAR, a novel SQL-guided pre-training framework for context-dependent text-to-SQL parsing, which leverages contextual information to enrich natural language (NL) utterance and table schema representations for text-to-SQL conversations. Concretely, we propose two novel pre-training objectives that respectively explore the context-dependent interactions of NL utterances and SQL queries within each text-to-SQL conversation: (i) a schema state tracking (SST) objective that tracks the schema states of context-dependent SQL queries by predicting and updating the value of each schema slot during the interaction; (ii) an utterance dependency tracking (UDT) objective that employs weighted contrastive learning to pull together the representations of semantically similar NL utterances and push apart those of semantically dissimilar ones within each conversation. In addition, we construct a high-quality, large-scale context-dependent text-to-SQL conversation corpus to pre-train STAR. Extensive experiments show that STAR achieves new state-of-the-art performance on two downstream benchmarks (SParC and CoSQL), significantly outperforming previous pre-training methods and ranking first on the leaderboard. We believe the release of the constructed corpus, codebase, and pre-trained STAR checkpoints will push forward research in this area.
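The abstract describes the UDT objective only at a high level; as an illustration, the sketch below shows one common form a weighted contrastive loss over utterance representations could take, an InfoNCE-style loss whose pair terms are scaled by semantic-similarity weights. The function name, the weighting scheme, and the temperature are assumptions, not the paper's actual objective.

```python
# Illustrative sketch (assumed, not the STAR implementation) of a weighted
# contrastive loss over utterance representations within one conversation.
import torch
import torch.nn.functional as F

def weighted_contrastive_loss(reps, sim_weights, temperature=0.1):
    """reps: (n, d) utterance representations from one conversation.
    sim_weights: (n, n) semantic-similarity weights in [0, 1]; higher
    values mark pairs to pull together, lower values to push apart.
    """
    z = F.normalize(reps, dim=-1)            # work in cosine-similarity space
    logits = z @ z.t() / temperature         # (n, n) pairwise scores
    n = reps.size(0)
    off_diag = ~torch.eye(n, dtype=torch.bool)
    # Exclude self-pairs from the softmax denominator.
    denom = torch.logsumexp(
        logits.masked_fill(~off_diag, float("-inf")), dim=1, keepdim=True)
    log_prob = logits - denom
    # Weight each pair's log-probability by its semantic similarity, so
    # similar utterances are pulled together more strongly than dissimilar ones.
    w = sim_weights * off_diag
    return -(w * log_prob).sum() / w.sum().clamp(min=1e-8)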
Co-authors
- Linlin Wang 1
- Gerard de Melo 1
- Fei Sun 1
- Liang He 1
- Xiangyu Li 1