Xiangyu Dong
2021
Logic-Consistency Text Generation from Semantic Parses
Chang Shu | Yusen Zhang | Xiangyu Dong | Peng Shi | Tao Yu | Rui Zhang
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
Injecting Entity Types into Entity-Guided Text Generation
Xiangyu Dong | Wenhao Yu | Chenguang Zhu | Meng Jiang
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Recent successes in deep generative modeling have led to significant advances in natural language generation (NLG). Incorporating entities into neural generation models has yielded substantial improvements by helping to infer the summary topic and generate coherent content. To enhance the role of entities in NLG, in this paper, we aim to model the entity type in the decoding phase to generate contextual words accurately. We develop a novel NLG model that produces a target sequence based on a given list of entities. Our model has a multi-step decoder that injects entity types into the process of entity mention generation. Experiments on two public news datasets demonstrate that type injection outperforms existing type-embedding concatenation baselines.
2020
MedDialog: Large-scale Medical Dialogue Datasets
Guangtao Zeng | Wenmian Yang | Zeqian Ju | Yue Yang | Sicheng Wang | Ruisi Zhang | Meng Zhou | Jiaqi Zeng | Xiangyu Dong | Ruoyu Zhang | Hongchao Fang | Penghui Zhu | Shu Chen | Pengtao Xie
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Medical dialogue systems hold promise for assisting telemedicine by increasing access to healthcare services, improving the quality of patient care, and reducing medical costs. To facilitate the research and development of medical dialogue systems, we build large-scale medical dialogue datasets – MedDialog – which contain 1) a Chinese dataset with 3.4 million conversations between patients and doctors, 11.3 million utterances, and 660.2 million tokens, covering 172 disease specialties, and 2) an English dataset with 0.26 million conversations, 0.51 million utterances, and 44.53 million tokens, covering 96 disease specialties. To the best of our knowledge, MedDialog is the largest medical dialogue dataset to date. We pretrain several dialogue generation models on the Chinese MedDialog dataset, including Transformer, GPT, and BERT-GPT, and compare their performance. Models trained on MedDialog are shown to generate clinically correct and doctor-like medical dialogues. We also study the transferability of models trained on MedDialog to low-resource medical dialogue generation tasks. Both human and automatic evaluation show that transfer learning — fine-tuning the models pretrained on MedDialog — greatly improves performance on medical dialogue generation tasks with small datasets. The datasets and code are available at https://github.com/UCSD-AI4H/Medical-Dialogue-System