Medical Knowledge-enriched Textual Entailment Framework

Shweta Yadav, Vishal Pallagani, Amit Sheth


Abstract
One of the cardinal tasks in achieving robust medical question answering systems is textual entailment. Existing approaches make use of an ensemble of pre-trained language models or data augmentation, often simply to post higher numbers on validation metrics. However, two major shortcomings impede further success in identifying entailment: (1) understanding the focus/intent of the question and (2) the ability to utilize real-world background knowledge to capture context beyond the sentence. In this paper, we present a novel Medical Knowledge-Enriched Textual Entailment framework that allows the model to acquire a semantic and global representation of the input medical text with the help of a relevant domain-specific knowledge graph. We evaluate our framework on the benchmark MEDIQA-RQE dataset and show that the knowledge-enriched dual-encoding mechanism helps achieve an absolute improvement of 8.27% over state-of-the-art (SOTA) language models.
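The abstract describes a dual-encoding mechanism in which each question is represented both by a text encoder and by embeddings drawn from a domain-specific medical knowledge graph. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' implementation: the class names, dimensions, and the simple embedding bags standing in for a pre-trained language model and for knowledge-graph entity embeddings are all assumptions.

```python
# Hypothetical sketch of a knowledge-enriched dual encoder for question entailment.
# Assumptions (not from the paper): bag-of-embedding encoders stand in for a
# pre-trained language model and a medical knowledge-graph embedding table;
# entity ids are assumed to come from an upstream entity-linking step.
import torch
import torch.nn as nn

class KnowledgeEnrichedDualEncoder(nn.Module):
    def __init__(self, vocab_size, num_entities, text_dim=256, kg_dim=128, num_classes=2):
        super().__init__()
        # Semantic view: contextual text representation (stand-in for a PLM).
        self.text_embed = nn.EmbeddingBag(vocab_size, text_dim)
        # Global view: knowledge-graph entities linked to the text.
        self.kg_embed = nn.EmbeddingBag(num_entities, kg_dim)
        # Classifier over the fused representations of both questions.
        self.classifier = nn.Sequential(
            nn.Linear(2 * (text_dim + kg_dim), 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def encode(self, token_ids, entity_ids):
        # Fuse the semantic (text) and global (KG) views of one question.
        return torch.cat([self.text_embed(token_ids), self.kg_embed(entity_ids)], dim=-1)

    def forward(self, q1_tokens, q1_entities, q2_tokens, q2_entities):
        # Dual encoding: each question is encoded independently,
        # then the pair is classified as entailment / non-entailment.
        h1 = self.encode(q1_tokens, q1_entities)
        h2 = self.encode(q2_tokens, q2_entities)
        return self.classifier(torch.cat([h1, h2], dim=-1))

# Toy usage with made-up token/entity ids (batch of 1).
model = KnowledgeEnrichedDualEncoder(vocab_size=30000, num_entities=5000)
logits = model(
    torch.tensor([[12, 847, 93]]), torch.tensor([[4, 17]]),
    torch.tensor([[12, 901, 55]]), torch.tensor([[4, 29]]),
)
print(logits.shape)  # torch.Size([1, 2])
```

In the actual framework, a pre-trained transformer and knowledge-graph entity embeddings would take the place of the toy embedding bags, with the pair labeled as entailment or non-entailment as in the MEDIQA-RQE task.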
Anthology ID: 2020.coling-main.161
Volume: Proceedings of the 28th International Conference on Computational Linguistics
Month: December
Year: 2020
Address: Barcelona, Spain (Online)
Venue: COLING
Publisher: International Committee on Computational Linguistics
Pages: 1795–1801
URL: https://aclanthology.org/2020.coling-main.161
DOI: 10.18653/v1/2020.coling-main.161
Cite (ACL):
Shweta Yadav, Vishal Pallagani, and Amit Sheth. 2020. Medical Knowledge-enriched Textual Entailment Framework. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1795–1801, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
Medical Knowledge-enriched Textual Entailment Framework (Yadav et al., COLING 2020)
PDF: https://preview.aclanthology.org/ingestion-script-update/2020.coling-main.161.pdf