2024
Context Aggregation with Topic-focused Summarization for Personalized Medical Dialogue Generation
Zhengyuan Liu | Siti Salleh | Pavitra Krishnaswamy | Nancy Chen
Proceedings of the 6th Clinical Natural Language Processing Workshop
In the realm of dialogue systems, generated responses often lack personalization. This is particularly true in the medical domain, where research is limited by the scarcity of domain-specific data and the complexity of modeling medical context and persona information. In this work, we investigate the potential of harnessing large language models for personalized medical dialogue generation. In particular, to better aggregate long conversational context, we adopt topic-focused summarization to distill core information from the dialogue history and use this information to guide the conversation flow and the generated content. Drawing inspiration from real-world telehealth conversations, we outline a comprehensive pipeline encompassing data processing, profile construction, and domain adaptation. This work not only highlights our technical approach but also shares distilled insights from the data preparation and model construction phases.
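As a rough illustration of the kind of pipeline the abstract describes (not the authors' implementation), the sketch below aggregates a long dialogue history by first producing a topic-focused summary and then conditioning response generation on that summary plus a patient profile. The topic names, prompt wording, and the `llm` callable are assumptions introduced for illustration.

```python
# Minimal sketch of context aggregation via topic-focused summarization.
# The `llm` callable, topics, and prompt wording are illustrative assumptions,
# not the paper's actual prompts or models.
from typing import Callable, Dict, List

def summarize_by_topic(llm: Callable[[str], str],
                       history: List[str],
                       topic: str) -> str:
    """Distill the dialogue history into a short summary focused on one topic."""
    prompt = (
        f"Summarize the following medical conversation, keeping only "
        f"information relevant to '{topic}':\n" + "\n".join(history)
    )
    return llm(prompt)

def generate_response(llm: Callable[[str], str],
                      profile: Dict[str, str],
                      topic_summaries: Dict[str, str],
                      last_turn: str) -> str:
    """Condition the next response on the patient profile and topic summaries."""
    context = "\n".join(f"[{t}] {s}" for t, s in topic_summaries.items())
    persona = "; ".join(f"{k}: {v}" for k, v in profile.items())
    prompt = (
        f"Patient profile: {persona}\n"
        f"Aggregated context:\n{context}\n"
        f"Patient says: {last_turn}\n"
        f"Reply as the care provider:"
    )
    return llm(prompt)

if __name__ == "__main__":
    # A trivial stand-in LLM so the sketch runs end to end; in practice this
    # would call an instruction-tuned, domain-adapted model.
    echo_llm = lambda prompt: prompt.splitlines()[-1]
    history = ["Nurse: How is your blood sugar?", "Patient: A bit high after meals."]
    summaries = {"glucose control": summarize_by_topic(echo_llm, history, "glucose control")}
    print(generate_response(echo_llm, {"condition": "type 2 diabetes"},
                            summaries, "Should I change my diet?"))
```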
2023
Joint Dialogue Topic Segmentation and Categorization: A Case Study on Clinical Spoken Conversations
Zhengyuan Liu | Siti Umairah Md Salleh | Hong Choon Oh | Pavitra Krishnaswamy | Nancy Chen
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track
Applying natural language processing techniques to clinical conversations can improve the efficiency of health management workflows for medical staff and patients. Dialogue segmentation and topic categorization are two fundamental steps for processing verbose spoken conversations and highlighting informative spans for downstream tasks. However, in practical use cases, due to variation in segmentation granularity and topic definitions, and the lack of diverse annotated corpora, no generic models are readily applicable to domain-specific applications. In this work, we introduce and adopt a joint model for dialogue segmentation and topic categorization, and conduct a case study on healthcare follow-up calls for diabetes management; we provide insights from both the data and model perspectives on performance and robustness.
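For readers curious how segmentation and topic categorization can share one model, here is a minimal PyTorch sketch: a bidirectional GRU runs over pre-computed utterance embeddings and feeds two heads, one predicting segment boundaries and one predicting each utterance's topic, trained with a summed loss. The encoder choice, hidden sizes, and label counts are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch of a joint dialogue segmentation / topic categorization model.
# Encoder choice, hidden sizes, and number of topics are illustrative.
import torch
import torch.nn as nn

class JointSegTopicModel(nn.Module):
    def __init__(self, utt_dim: int = 256, hidden: int = 128, n_topics: int = 8):
        super().__init__()
        # Contextualize each utterance embedding within the conversation.
        self.encoder = nn.GRU(utt_dim, hidden, batch_first=True, bidirectional=True)
        self.boundary_head = nn.Linear(2 * hidden, 2)      # boundary vs. not
        self.topic_head = nn.Linear(2 * hidden, n_topics)  # topic of the utterance

    def forward(self, utt_embs: torch.Tensor):
        # utt_embs: (batch, n_utterances, utt_dim), e.g. pooled sentence vectors
        ctx, _ = self.encoder(utt_embs)
        return self.boundary_head(ctx), self.topic_head(ctx)

def joint_loss(boundary_logits, topic_logits, boundary_labels, topic_labels):
    ce = nn.CrossEntropyLoss()
    seg = ce(boundary_logits.flatten(0, 1), boundary_labels.flatten())
    top = ce(topic_logits.flatten(0, 1), topic_labels.flatten())
    return seg + top  # equal weighting here; a tunable trade-off in practice

if __name__ == "__main__":
    model = JointSegTopicModel()
    utts = torch.randn(2, 10, 256)  # 2 calls, 10 utterances each
    b_logits, t_logits = model(utts)
    loss = joint_loss(b_logits, t_logits,
                      torch.randint(0, 2, (2, 10)), torch.randint(0, 8, (2, 10)))
    loss.backward()
    print(loss.item())
```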
2021
Analyzing Code Embeddings for Coding Clinical Narratives
Wei Shi | Jiewen Wu | Xiwen Yang | Nancy Chen | Ivan Ho Mien | Jung-Jae Kim | Pavitra Krishnaswamy
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
2020
Uncertainty Modeling for Machine Comprehension Systems using Efficient Bayesian Neural Networks
Zhengyuan Liu | Pavitra Krishnaswamy | Ai Ti Aw | Nancy Chen
Proceedings of the 28th International Conference on Computational Linguistics: Industry Track
While neural approaches have achieved significant improvement in machine comprehension tasks, models often operate as black boxes with limited interpretability, which requires special attention in domains such as healthcare and education. Quantifying uncertainty helps pave the way towards more interpretable neural networks. In classification and regression tasks, Bayesian neural networks have been effective in estimating model uncertainty. However, their inference time increases linearly with the number of required samples, so speed becomes a bottleneck in tasks with high system complexity such as question answering or dialogue generation. In this work, we propose a hybrid neural architecture that quantifies model uncertainty using Bayesian weight approximation while boosting inference speed by 80% relative at test time, and we apply it to a clinical dialogue comprehension task. The proposed approach also enables active learning, so that an updated model can be trained more effectively on new incoming data by selecting samples that are not well represented in the current training set.
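One common way to realize Bayesian weight approximation is Monte Carlo dropout; the sketch below illustrates the speedup idea the abstract hints at by computing a large deterministic encoder once and resampling only a small stochastic head, so the per-sample cost no longer includes the full network. This is an assumed realization for illustration, not the paper's exact architecture.

```python
# Sketch: uncertainty via MC dropout, with the deterministic encoder computed
# once and only a small stochastic head resampled. Sizes are illustrative.
import torch
import torch.nn as nn

class HybridUncertaintyModel(nn.Module):
    def __init__(self, in_dim=512, hid=512, n_classes=5, p=0.2):
        super().__init__()
        # Expensive deterministic part: run once per input.
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(),
                                     nn.Linear(hid, hid), nn.ReLU())
        # Cheap stochastic head: dropout stays active at test time (MC dropout).
        self.head = nn.Sequential(nn.Dropout(p), nn.Linear(hid, n_classes))

    @torch.no_grad()
    def predict_with_uncertainty(self, x: torch.Tensor, n_samples: int = 20):
        feats = self.encoder(x)  # computed a single time
        self.head.train()        # keep dropout on for sampling
        probs = torch.stack([self.head(feats).softmax(-1) for _ in range(n_samples)])
        mean = probs.mean(0)     # predictive distribution
        entropy = -(mean * mean.clamp_min(1e-12).log()).sum(-1)  # uncertainty score
        return mean, entropy

if __name__ == "__main__":
    model = HybridUncertaintyModel()
    mean, unc = model.predict_with_uncertainty(torch.randn(4, 512))
    print(mean.shape, unc)  # high-entropy inputs are candidates for active learning
```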
2019
Fast Prototyping a Dialogue Comprehension System for Nurse-Patient Conversations on Symptom Monitoring
Zhengyuan Liu | Hazel Lim | Nur Farah Ain Suhaimi | Shao Chuen Tong | Sharon Ong | Angela Ng | Sheldon Lee | Michael R. Macdonald | Savitha Ramasamy | Pavitra Krishnaswamy | Wai Leng Chow | Nancy F. Chen
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers)
Data for human-human spoken dialogues for research and development are currently very limited in quantity, variety, and sources; such data are even scarcer in healthcare. In this work, we investigate fast prototyping of a dialogue comprehension system by leveraging a minimal set of nurse-to-patient conversations. We propose a framework inspired by nurse-initiated clinical symptom monitoring conversations to construct a simulated human-human dialogue dataset that embodies linguistic characteristics of spoken interactions such as thinking aloud, self-contradiction, and topic drift. We then adopt an established bidirectional attention pointer network on this simulated dataset, achieving more than 80% F1 score on a held-out test set from real-world nurse-to-patient conversations. The ability to automatically comprehend conversations in the healthcare domain by exploiting only limited data has implications for improving clinical workflows through red-flag symptom detection and triaging capabilities. We demonstrate the feasibility of efficient and effective extraction, retrieval, and comprehension of symptom-checking information discussed in multi-turn human-human spoken conversations.
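To give a flavor of how a simulated symptom-monitoring dialogue set can be assembled from templates while injecting spoken-language phenomena such as thinking aloud, self-contradiction, and topic drift, here is a toy generator; the symptom list, templates, and injection probabilities are made up for illustration and are not the paper's simulation framework.

```python
# Toy generator of simulated nurse-patient symptom-monitoring dialogues.
# Symptoms, templates, and noise probabilities are illustrative only.
import random

SYMPTOMS = ["breathlessness", "chest pain", "swelling in the legs"]

def simulate_dialogue(rng: random.Random) -> list:
    turns = []
    for symptom in SYMPTOMS:
        turns.append(f"Nurse: Have you had any {symptom} this week?")
        answer = rng.choice(["yes", "no"])
        reply = f"Patient: {answer.capitalize()}, I think."
        if rng.random() < 0.3:   # thinking aloud
            reply = "Patient: Hmm, let me think... " + reply[9:]
        turns.append(reply)
        if rng.random() < 0.2:   # self-contradiction
            flipped = "no" if answer == "yes" else "yes"
            turns.append(f"Patient: Actually, {flipped}, now that I recall.")
        if rng.random() < 0.2:   # topic drift
            turns.append("Patient: By the way, my daughter visited on Sunday.")
    return turns

if __name__ == "__main__":
    rng = random.Random(13)
    for turn in simulate_dialogue(rng):
        print(turn)
```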
2018
Attention-based Semantic Priming for Slot-filling
Jiewen Wu | Rafael E. Banchs | Luis Fernando D’Haro | Pavitra Krishnaswamy | Nancy Chen
Proceedings of the Seventh Named Entities Workshop
The problem of sequence labelling in language understanding would benefit from approaches inspired by semantic priming phenomena. We propose that an attention-based RNN architecture can be used to simulate semantic priming for sequence labelling. Specifically, we employ pre-trained word embeddings to characterize the semantic relationship between utterances and labels. We validate the approach using varying sizes of the ATIS and MEDIA datasets, and show up to 1.4-1.9% improvement in F1 score. The developed framework can enable more explainable and generalizable spoken language understanding systems.
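As an illustration of attending over label embeddings to "prime" slot predictions (an assumed simplification, not the paper's exact architecture), the sketch below scores each token's BiLSTM state against label embeddings, which would ordinarily be initialized from pre-trained word vectors of the label names, and feeds the attended label context back into the tagger.

```python
# Sketch: attention between token states and label embeddings as a form of
# semantic priming for slot-filling. Dimensions and label count are illustrative.
import torch
import torch.nn as nn

class PrimedSlotTagger(nn.Module):
    def __init__(self, vocab=1000, emb=100, hid=128, n_labels=20):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        # Label embeddings would normally be initialized from pre-trained
        # word vectors of the label names (e.g. "city", "airline").
        self.label_emb = nn.Parameter(torch.randn(n_labels, emb))
        self.attn_proj = nn.Linear(2 * hid, emb)
        self.out = nn.Linear(2 * hid + emb, n_labels)

    def forward(self, tokens: torch.Tensor):
        h, _ = self.lstm(self.tok_emb(tokens))          # (B, T, 2*hid)
        q = self.attn_proj(h)                           # (B, T, emb)
        scores = q @ self.label_emb.t()                 # (B, T, n_labels)
        primed = scores.softmax(-1) @ self.label_emb    # attended label context
        return self.out(torch.cat([h, primed], dim=-1)) # slot logits per token

if __name__ == "__main__":
    tagger = PrimedSlotTagger()
    logits = tagger(torch.randint(0, 1000, (2, 12)))
    print(logits.shape)  # (2, 12, 20)
```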