Yuqiang Han
2022
DialMed: A Dataset for Dialogue-based Medication Recommendation
Zhenfeng He | Yuqiang Han | Zhenqiu Ouyang | Wei Gao | Hongxu Chen | Guandong Xu | Jian Wu
Proceedings of the 29th International Conference on Computational Linguistics
Medication recommendation is a crucial task for intelligent healthcare systems. Previous studies mainly recommend medications with electronic health records (EHRs). However, some details of the interactions between doctors and patients may be ignored or omitted in EHRs, and these details are essential for automatic medication recommendation. Therefore, we make the first attempt to recommend medications from the conversations between doctors and patients. In this work, we construct DIALMED, the first high-quality dataset for the medical dialogue-based medication recommendation task. It contains 11,996 medical dialogues related to 16 common diseases from 3 departments and 70 corresponding common medications. Furthermore, we propose a Dialogue structure and Disease knowledge aware Network (DDN), in which a QA Dialogue Graph mechanism models the dialogue structure and a knowledge graph introduces external disease knowledge. Extensive experimental results demonstrate that the proposed method is a promising solution for recommending medications from medical dialogues. The dataset and code are available at https://github.com/f-window/DialMed.
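The abstract describes the task framing only at a high level: multi-label medication recommendation over dialogue turns, with a graph linking question and answer utterances. The snippet below is a minimal, hypothetical sketch of that framing in PyTorch; it is not the authors' DDN, and the bag-of-words encoder, the graph construction, and all sizes (vocabulary, hidden width) are illustrative assumptions. The only detail grounded in the abstract is the 70-medication label space.

```python
# Minimal sketch (NOT the authors' DDN): dialogue-based medication
# recommendation as multi-label classification over utterances that are
# connected in a question-answer graph. All sizes are assumptions.
import torch
import torch.nn as nn

NUM_MEDS = 70          # label space from DialMed: 70 common medications
VOCAB, EMB, HID = 5000, 64, 64   # illustrative assumptions

class DialogueRecommender(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.EmbeddingBag(VOCAB, EMB)  # bag-of-words utterance encoder
        self.mix = nn.Linear(EMB, HID)            # one round of mixing over the QA graph
        self.out = nn.Linear(HID, NUM_MEDS)       # one logit per medication

    def forward(self, utterances, adj):
        # utterances: list of 1-D LongTensors of token ids, one per dialogue turn
        # adj: (T, T) adjacency matrix linking question turns to answer turns
        h = torch.stack([self.embed(u.unsqueeze(0)).squeeze(0) for u in utterances])
        h = torch.relu(self.mix(adj @ h))         # aggregate neighbours in the QA graph
        return self.out(h.mean(dim=0))            # pool turns into dialogue-level logits

model = DialogueRecommender()
turns = [torch.randint(0, VOCAB, (8,)), torch.randint(0, VOCAB, (6,))]
adj = torch.tensor([[0., 1.], [1., 0.]])          # doctor question <-> patient answer
probs = torch.sigmoid(model(turns, adj))          # independent probability per drug
top3 = probs.topk(3).indices                      # recommend the highest-scoring drugs
```

Treating the output as 70 independent sigmoid probabilities matches the multi-label nature of the task: a single dialogue can warrant several medications at once.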
Deep Reinforcement Learning for Entity Alignment
Lingbing Guo | Yuqiang Han | Qiang Zhang | Huajun Chen
Findings of the Association for Computational Linguistics: ACL 2022
Embedding-based methods have attracted increasing attention in recent entity alignment (EA) studies. Despite the great promise they offer, several limitations remain. The most notable is that they identify aligned entities based on cosine similarity, ignoring the semantics underlying the embeddings themselves. Furthermore, these methods are shortsighted: they heuristically select the closest entity as the target and allow multiple entities to match the same candidate. To address these limitations, we model entity alignment as a sequential decision-making task, in which an agent sequentially decides whether two entities are matched or mismatched based on their representation vectors. The proposed reinforcement learning (RL)-based entity alignment framework can be flexibly adapted to most embedding-based EA methods. The experimental results demonstrate that it consistently advances the performance of several state-of-the-art methods, with a maximum improvement of 31.1% on Hits@1.
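To make the sequential decision-making framing concrete, here is a minimal, hypothetical sketch: an agent walks over source entities one at a time, commits to or declines a match, and consumes matched candidates so that no two source entities can share the same target. This is not the paper's learned RL policy; the cosine scorer (precisely the heuristic the paper moves beyond), the acceptance threshold, and the random embeddings are stand-in assumptions used only to show the one-to-one sequential mechanics.

```python
# Minimal sketch (NOT the paper's RL framework): entity alignment as a
# sequence of match/mismatch decisions over precomputed embeddings.
import numpy as np

rng = np.random.default_rng(0)
src = rng.normal(size=(5, 16))     # source-KG entity embeddings (assumed given)
tgt = rng.normal(size=(5, 16))     # target-KG entity embeddings (assumed given)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

available = set(range(len(tgt)))   # candidates not yet consumed by a match
THRESHOLD = 0.2                    # illustrative confidence cutoff, not learned

alignment = {}
for i, e in enumerate(src):        # sequential decisions, one source entity at a time
    scores = {j: cosine(e, tgt[j]) for j in available}
    if not scores:
        break
    j, s = max(scores.items(), key=lambda kv: kv[1])
    if s >= THRESHOLD:             # "match" action: commit and consume the candidate
        alignment[i] = j
        available.remove(j)
    # else "mismatch" action: leave entity i unaligned in this pass

print(alignment)                   # one-to-one mapping: no candidate is reused
```

Removing a matched candidate from the pool is what distinguishes this sequential view from the shortsighted nearest-neighbour heuristic criticized in the abstract, where several entities can collapse onto the same target.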
Co-authors
- Zhenfeng He 1
- Zhenqiu Ouyang 1
- Wei Gao 1
- Hongxu Chen 1
- Guandong Xu 1