Wenchang Ma
2021
CR-Walker: Tree-Structured Graph Reasoning and Dialog Acts for Conversational Recommendation
Wenchang Ma | Ryuichi Takanobu | Minlie Huang
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Conversational Recommender Systems (CRS), which explore user preferences through conversational interactions in order to make appropriate recommendations, have attracted growing interest. However, existing CRS still lack the ability to (1) traverse multiple reasoning paths over background knowledge to introduce relevant items and attributes, and (2) arrange the selected entities appropriately under the current system intent to control response generation. To address these issues, we propose CR-Walker, a model that performs tree-structured reasoning on a knowledge graph and generates informative dialog acts to guide language generation. This tree-structured reasoning scheme treats the entity traversed at each hop as part of a dialog act, linking how entities are selected to how they are expressed. Automatic and human evaluations show that CR-Walker arrives at more accurate recommendations and generates more informative and engaging responses.
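To make the idea of tree-structured reasoning concrete, here is a minimal sketch in Python. It is not CR-Walker's implementation (which learns neural relevance scores over a large KG); the toy graph, the token-overlap scorer, and all entity names are illustrative assumptions. It shows the two steps the abstract describes: expanding a reasoning tree hop by hop from the current system intent, then linearizing the traversed entities into a dialog act that can condition a response generator.

```python
# Minimal sketch of tree-structured reasoning over a toy knowledge graph.
# The graph, scoring function, and entity names are illustrative assumptions,
# not CR-Walker's learned model.

from dataclasses import dataclass, field
from typing import List

# Toy KG: entity -> neighboring entities (items and attributes).
KG = {
    "recommend": ["Inception", "Interstellar"],
    "Inception": ["sci-fi", "Christopher Nolan"],
    "Interstellar": ["sci-fi", "space"],
}

@dataclass
class TreeNode:
    entity: str
    children: List["TreeNode"] = field(default_factory=list)

def score(entity: str, context: str) -> float:
    # Stand-in for a learned relevance score between an entity and the
    # dialog context; here: simple token overlap with the user utterance.
    return sum(tok in context.lower() for tok in entity.lower().split())

def reason(root_entity: str, context: str, hops: int = 2, beam: int = 2) -> TreeNode:
    """Expand the reasoning tree hop by hop, keeping the top-`beam`
    neighbors of each frontier node."""
    root = TreeNode(root_entity)
    frontier = [root]
    for _ in range(hops):
        next_frontier = []
        for node in frontier:
            neighbors = KG.get(node.entity, [])
            ranked = sorted(neighbors, key=lambda e: score(e, context), reverse=True)
            for ent in ranked[:beam]:
                child = TreeNode(ent)
                node.children.append(child)
                next_frontier.append(child)
        frontier = next_frontier
    return root

def to_dialog_act(node: TreeNode) -> str:
    """Linearize the reasoning tree into a dialog act string that
    can condition the language generator."""
    if not node.children:
        return node.entity
    inner = ", ".join(to_dialog_act(c) for c in node.children)
    return f"{node.entity}({inner})"

tree = reason("recommend", "I love sci-fi movies by Christopher Nolan")
print(to_dialog_act(tree))
# recommend(Inception(Christopher Nolan, sci-fi), Interstellar(sci-fi, space))
```

Because every traversed entity appears in the linearized act, the generator sees both which entities were selected and the reasoning structure that selected them, which is the link between selection and expression the abstract refers to.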
Extract, Denoise and Enforce: Evaluating and Improving Concept Preservation for Text-to-Text Generation
Yuning Mao | Wenchang Ma | Deren Lei | Jiawei Han | Xiang Ren
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Prior studies on text-to-text generation typically assume that the model can figure out what to attend to in the input and what to include in the output via seq2seq learning alone, given only parallel training data and no additional guidance. However, it remains unclear whether current models preserve important concepts in the source input, as seq2seq learning places no explicit focus on these concepts, and commonly used evaluation metrics treat them as no more important than other tokens. In this paper, we present a systematic analysis of whether current seq2seq models, especially pre-trained language models, are good enough at preserving important input concepts, and of the extent to which explicitly guiding generation with these concepts as lexical constraints is beneficial. We answer these questions through extensive analytical experiments on four representative text-to-text generation tasks. Based on our observations, we propose a simple yet effective framework that automatically extracts, denoises, and enforces important input concepts as lexical constraints. This new method performs comparably to or better than its unconstrained counterpart on automatic metrics, achieves higher concept coverage, and receives better ratings in human evaluation. Our code is available at https://github.com/morningmoni/EDE.
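The extract/denoise/enforce pipeline can be illustrated with a small sketch. This is not the paper's method (see the linked repository for the authors' implementation); the frequency-based extraction heuristic, the stopword list, the 0.5 support threshold, and the coverage-based reranking of candidate outputs are all illustrative assumptions standing in for learned extraction and constrained decoding.

```python
# Minimal sketch of an extract -> denoise -> enforce pipeline.
# All heuristics below are illustrative assumptions, not the paper's
# exact method.

import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "was", "for", "on"}

def extract(source: str, top_k: int = 5) -> list:
    """Extract candidate concepts: frequent content words in the source."""
    tokens = re.findall(r"[a-z]+", source.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS and len(t) > 2)
    return [w for w, _ in counts.most_common(top_k)]

def denoise(concepts: list, reference_outputs: list) -> list:
    """Denoise: keep only candidates that are also supported by
    reference outputs, dropping noisy extractions."""
    kept = []
    for c in concepts:
        support = sum(c in ref.lower() for ref in reference_outputs)
        if support / max(len(reference_outputs), 1) >= 0.5:
            kept.append(c)
    return kept

def enforce(candidates: list, constraints: list) -> str:
    """Enforce: among candidate generations (e.g., beam hypotheses),
    prefer the one covering the most lexical constraints.
    Substring matching is crude but sufficient for a sketch."""
    def coverage(text: str) -> int:
        return sum(c in text.lower() for c in constraints)
    return max(candidates, key=coverage)

source = "The new solar telescope captured detailed images of solar flares."
references = ["Solar telescope images flares in detail."]
constraints = denoise(extract(source), references)
beams = [
    "A telescope took pictures of the sun.",
    "The solar telescope captured detailed images of flares.",
]
print(constraints)              # ['solar', 'telescope']
print(enforce(beams, constraints))
# The solar telescope captured detailed images of flares.
```

In the paper's setting, enforcement happens during decoding rather than by post-hoc reranking, but the sketch captures the division of labor: extraction proposes concepts, denoising filters them, and enforcement guarantees their presence in the output.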