Anas Anwarul Haq Khan


2025

MOD-KG: MultiOrgan Diagnosis Knowledge Graph
Anas Anwarul Haq Khan | Pushpak Bhattacharyya
NLP-AI4Health

The human body is highly interconnected: a diagnosis in one organ can influence conditions in others. In medical research, graphs (such as Knowledge Graphs and Causal Graphs) have proven useful for capturing these relationships, but constructing them manually with expert input is both costly and time-intensive, especially given the continuous flow of new findings. To address this, we leverage the extraction capabilities of large language models (LLMs) to build the **MultiOrgan Diagnosis Knowledge Graph (MOD-KG)**. MOD-KG contains over **21,200 knowledge triples**, derived from both textbooks **(~13%)** and carefully selected research papers (with an average of **444** citations each). The graph focuses primarily on the *heart, lungs, kidneys, liver, pancreas, and brain*, which are central to much of today’s multimodal imaging research. The LLM’s extraction quality was benchmarked against baselines on **1,000** samples, demonstrating its reliability. We will make our dataset public upon acceptance.
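The abstract describes extracting knowledge triples with an LLM. Below is a minimal Python sketch of what such an extraction step could look like; the prompt wording, the `call_llm` callable, and the JSON reply format are all illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of LLM-based knowledge-triple extraction (illustrative only).
# `call_llm` is a hypothetical stand-in for whatever LLM API is actually used.
import json

EXTRACTION_PROMPT = """Extract medical knowledge triples from the passage below.
Return a JSON list of [subject, relation, object] triples that link a diagnosis
in one organ to a condition in another (heart, lungs, kidneys, liver, pancreas, brain).

Passage:
{passage}
"""

def extract_triples(passage: str, call_llm) -> list[tuple[str, str, str]]:
    """Prompt the LLM and parse its JSON reply into (subject, relation, object) triples."""
    reply = call_llm(EXTRACTION_PROMPT.format(passage=passage))
    return [tuple(triple) for triple in json.loads(reply)]

# A resulting triple such as ("chronic kidney disease", "increases risk of",
# "left ventricular hypertrophy") becomes one edge of the knowledge graph.
```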

2024

Hope ‘The Paragraph Guy’ explains the rest : Introducing MeSum, the Meme Summarizer
Anas Anwarul Haq Khan | Tanik Saikh | Arpan Phukan | Asif Ekbal
Findings of the Association for Computational Linguistics: EMNLP 2024

Sumotosima : A Framework and Dataset for Classifying and Summarizing Otoscopic Images
Eram Anwarul Khan | Anas Anwarul Haq Khan
Proceedings of the 21st International Conference on Natural Language Processing (ICON)

Otoscopy is a diagnostic procedure for examining the ear canal and eardrum using an otoscope. It identifies conditions such as infections, foreign bodies, eardrum perforations, and other ear abnormalities. We propose Sumotosima (Summarizer for Otoscopic Images), a novel resource-efficient deep learning and transformer-based framework that provides an end-to-end pipeline for classification followed by summarization. Our framework is trained with a combination of triplet and cross-entropy losses. Additionally, we use Knowledge Enhanced Multimodal BART, whose input is fused textual and image embeddings. The objective is to deliver summaries that are well suited for patients, ensuring clarity and efficiency in understanding otoscopic images. Given the lack of existing datasets, we curated our own OCASD (Otoscopy Classification And Summary Dataset), which contains 500 images across 5 unique categories, each annotated with its class and a summary by otolaryngologists. On classification, Sumotosima achieved an accuracy of 98.03%, which is 7.00%, 3.10%, and 3.01% higher than K-Nearest Neighbors, Random Forest, and Support Vector Machines, respectively. For summarization, Sumotosima outperformed GPT-4o and LLaVA by 88.53% and 107.57% in ROUGE scores, respectively. We have made our code and dataset publicly available at https://github.com/anas2908/Sumotosima
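As a rough illustration of the combined objective mentioned in the abstract, here is a minimal PyTorch sketch; the margin, the mixing weight `alpha`, and the assumption that the triplet loss acts on image embeddings while cross-entropy acts on class logits are ours, not necessarily the paper's exact setup.

```python
# Minimal PyTorch sketch of a combined triplet + cross-entropy objective
# (illustrative assumptions: margin, mixing weight, and where each loss applies).
import torch
import torch.nn as nn

triplet_loss = nn.TripletMarginLoss(margin=1.0)  # pulls same-class embeddings together
ce_loss = nn.CrossEntropyLoss()                  # supervises the class logits
alpha = 0.5                                      # assumed mixing weight

def combined_loss(anchor, positive, negative, logits, labels):
    """Triplet loss on embeddings plus cross-entropy on classifier logits."""
    return alpha * triplet_loss(anchor, positive, negative) + ce_loss(logits, labels)

# Usage with dummy tensors: 8 samples, 128-d embeddings, 5 otoscopic classes.
emb = lambda: torch.randn(8, 128)
loss = combined_loss(emb(), emb(), emb(), torch.randn(8, 5), torch.randint(0, 5, (8,)))
```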