2026
CoVaPh: A Vision-Language Multi-Agent Dialogue System for Tool-Augmented Pharmacogenetic Reasoning and Personalized Guidance
Shang-Chun Luke Lu | Hsin Yang | Hui-Hsin Xue | Ping Lin Tsai | Yu Jing Weng | Shiou-Chi Li | Jen-Wei Huang | Hui Hua Chang
Proceedings of the 16th International Workshop on Spoken Dialogue System Technology
The post-pandemic healthcare labor crisis has intensified the demand for accessible, high-precision pharmaceutical care. To meet this challenge, we introduce CoVaPh, a multi-agent pharmacogenetic framework that integrates information retrieval with Large Language Model (LLM) and Vision-Language Model (VLM) technologies. At its core, a fine-tuned query rewriting module transforms clinical inquiries into structured search indices, enabling precise multimodal retrieval from CPIC and PharmGKB while mitigating hallucination risks. By synthesizing structured API data with unstructured evidence from clinical guidelines, our framework delivers reliable, context-aware responses, surpassing benchmark systems by 10% on expert-curated datasets. This approach provides a scalable solution to alleviate clinical workloads and democratize access to specialized medical knowledge.
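To make the retrieval-augmented flow described above concrete, the following is a minimal sketch of the pipeline shape: a query rewriter maps a free-text clinical inquiry to a structured index, which drives retrieval from structured and guideline sources before an answer agent synthesizes a grounded response. All function names, data fields, and the example inquiry are illustrative assumptions, not the authors' released implementation or the actual PharmGKB/CPIC interfaces.

```python
# Illustrative sketch only; module names and data shapes are assumptions,
# not the CoVaPh implementation described in the paper.
from dataclasses import dataclass


@dataclass
class RetrievedEvidence:
    source: str   # e.g. "PharmGKB annotation" or "CPIC guideline"
    content: str


def rewrite_query(clinical_inquiry: str) -> dict:
    """Stub for the fine-tuned rewriter: turns a free-text inquiry into a
    structured index (drug, gene, phenotype) used to drive retrieval."""
    # A real system would call the fine-tuned LLM here; this stub only
    # shows the intended input/output shape.
    return {"drug": "clopidogrel", "gene": "CYP2C19", "phenotype": "poor metabolizer"}


def retrieve_structured(index: dict) -> list[RetrievedEvidence]:
    """Placeholder for structured lookups (e.g. drug-gene records via an API)."""
    return [RetrievedEvidence("PharmGKB annotation",
                              f"Record for {index['drug']} / {index['gene']}")]


def retrieve_unstructured(index: dict) -> list[RetrievedEvidence]:
    """Placeholder for guideline passage retrieval (e.g. CPIC documents)."""
    return [RetrievedEvidence("CPIC guideline",
                              f"Dosing guidance for {index['gene']} {index['phenotype']}")]


def synthesize_answer(inquiry: str, evidence: list[RetrievedEvidence]) -> str:
    """Placeholder for the answering agent, grounded in retrieved evidence."""
    cited = "; ".join(f"[{e.source}] {e.content}" for e in evidence)
    return f"Answer to '{inquiry}', grounded in: {cited}"


if __name__ == "__main__":
    inquiry = "Should a CYP2C19 poor metabolizer receive standard-dose clopidogrel?"
    index = rewrite_query(inquiry)
    evidence = retrieve_structured(index) + retrieve_unstructured(index)
    print(synthesize_answer(inquiry, evidence))
```

The stubs stand in for the fine-tuned rewriting, multimodal retrieval, and LLM/VLM synthesis stages; only the overall structured-index-then-grounded-answer flow is taken from the abstract.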