2025
Automated Knowledge Graph Construction using Large Language Models and Sentence Complexity Modelling
Sydney Anuyah | Mehedi Mahmud Kaushik | Sri Rama Krishna Reddy Dwarampudi | Rakesh Shiradkar | Arjan Durresi | Sunandan Chakraborty
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
We introduce CoDe-KG, an open-source, end-to-end pipeline for extracting sentence-level knowledge graphs by combining robust coreference resolution with syntactic sentence decomposition. Using our model, we contribute an open-source dataset of over 150,000 knowledge triples. We also contribute a training corpus of 7,248 rows for sentence complexity, 200 rows of gold human annotations for coreference resolution on lung-cancer abstracts from PubMed, 900 rows of gold human annotations for sentence-conversion policies on sentences from those abstracts, and 398 gold human-annotated triples. We systematically select optimal prompt-model pairs across five complexity categories, showing that hybrid chain-of-thought and few-shot prompting yields up to 99.8% exact-match accuracy on sentence simplification. On relation extraction (RE), our pipeline achieves 65.8% macro-F1 on REBEL, an 8-point gain over the prior state of the art, and 75.7% micro-F1 on WebNLG2, while matching or exceeding prior performance on Wiki-NRE and CaRB. Ablation studies show that integrating coreference resolution and sentence decomposition increases recall on rare relations by over 20%.
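The abstract describes a three-stage flow: resolve coreferences so pronouns become explicit entity mentions, decompose complex sentences into simple clauses, then extract one triple per clause. The minimal Python sketch below illustrates that staging only; the llm() helper, the prompt wording, and the function names are assumptions for illustration, not the paper's released implementation.

```python
def llm(prompt: str) -> str:
    """Placeholder for a chat-completion call; swap in any real
    client (hosted API or local model) to run this end to end."""
    raise NotImplementedError

def resolve_coreferences(text: str) -> str:
    # Stage 1: replace pronouns and aliases with their antecedents,
    # mirroring the coreference-resolution step described above.
    return llm(
        "Rewrite the text so every pronoun is replaced by the entity "
        f"it refers to. Return only the rewritten text.\n\n{text}"
    )

def decompose(text: str) -> list[str]:
    # Stage 2: split complex and compound sentences into simple,
    # standalone clauses, one candidate fact per line.
    simplified = llm(
        "Split each sentence into simple, standalone clauses, "
        f"one per line, preserving meaning.\n\n{text}"
    )
    return [line.strip() for line in simplified.splitlines() if line.strip()]

def extract_triple(clause: str) -> tuple[str, str, str] | None:
    # Stage 3: relation extraction on a single simplified clause.
    raw = llm(
        "Extract one (subject | relation | object) triple from the "
        f"clause, pipe-separated, or answer NONE.\n\n{clause}"
    )
    parts = [p.strip() for p in raw.split("|")]
    return (parts[0], parts[1], parts[2]) if len(parts) == 3 else None

def sentence_to_triples(text: str) -> list[tuple[str, str, str]]:
    resolved = resolve_coreferences(text)
    return [t for c in decompose(resolved) if (t := extract_triple(c)) is not None]
```

Running simplification before extraction is the design choice the ablation credits: each clause reaching the extractor carries a single explicit fact, which is what drives the reported recall gain on rare relations.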