Zero- and Few-Shots Knowledge Graph Triplet Extraction with Large Language Models

Andrea Papaluca, Daniel Krefl, Sergio Rodríguez Méndez, Artem Lensky, Hanna Suominen


Abstract
In this work, we tested the Triplet Extraction (TE) capabilities of a variety of Large Language Models (LLMs) of different sizes in the Zero- and Few-Shot settings. Specifically, we proposed a pipeline that dynamically gathers contextual information from a Knowledge Base (KB), both as context triplets and as (sentence, triplets) example pairs, and provides it to the LLM through a prompt. The additional context allowed the LLMs to be competitive with all of the older, fully trained baselines based on the Bidirectional Long Short-Term Memory (BiLSTM) Network architecture. We further conducted a detailed analysis of the quality of the gathered KB context and found it to be strongly correlated with the final TE performance of the model. In contrast, the size of the model appeared to improve the TE capabilities of the LLMs only logarithmically. We release the code on GitHub for reproducibility.
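The abstract describes a retrieval-augmented prompting pipeline: context triplets and (sentence, triplets) example pairs are retrieved from a KB and placed in the prompt alongside the input sentence before querying the LLM. Below is a minimal sketch of how such a prompt could be assembled; the function name build_te_prompt and its arguments are hypothetical illustrations under that assumption, not the authors' actual implementation (which is available in their released code).

```python
from typing import List, Tuple

Triplet = Tuple[str, str, str]


def build_te_prompt(
    sentence: str,
    context_triplets: List[Triplet],
    example_pairs: List[Tuple[str, List[Triplet]]],
) -> str:
    """Assemble a Few-Shot triplet-extraction prompt from KB-retrieved context.

    `context_triplets` and `example_pairs` stand in for whatever the KB
    retrieval step returns; the retrieval itself is out of scope here.
    """
    lines = ["Extract (subject, relation, object) triplets from the sentence."]

    # Context triplets retrieved from the KB for the input sentence.
    lines.append("Relevant KB triplets:")
    lines += [f"  ({s}, {r}, {o})" for s, r, o in context_triplets]

    # (sentence, triplets) pairs used as in-context (Few-Shot) examples.
    lines.append("Examples:")
    for ex_sentence, ex_triplets in example_pairs:
        lines.append(f"  Sentence: {ex_sentence}")
        lines.append(
            "  Triplets: "
            + "; ".join(f"({s}, {r}, {o})" for s, r, o in ex_triplets)
        )

    # The sentence to extract triplets from.
    lines.append(f"Sentence: {sentence}")
    lines.append("Triplets:")
    return "\n".join(lines)
```

The resulting string would then be sent to the LLM, and its completion parsed back into (subject, relation, object) triplets; in the Zero-Shot setting the example block would simply be omitted.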
Anthology ID: 2024.kallm-1.2
Volume: Proceedings of the 1st Workshop on Knowledge Graphs and Large Language Models (KaLLM 2024)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Russa Biswas, Lucie-Aimée Kaffee, Oshin Agarwal, Pasquale Minervini, Sameer Singh, Gerard de Melo
Venues: KaLLM | WS
Publisher: Association for Computational Linguistics
Pages: 12–23
URL: https://aclanthology.org/2024.kallm-1.2
DOI: 10.18653/v1/2024.kallm-1.2
Cite (ACL): Andrea Papaluca, Daniel Krefl, Sergio Rodríguez Méndez, Artem Lensky, and Hanna Suominen. 2024. Zero- and Few-Shots Knowledge Graph Triplet Extraction with Large Language Models. In Proceedings of the 1st Workshop on Knowledge Graphs and Large Language Models (KaLLM 2024), pages 12–23, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Zero- and Few-Shots Knowledge Graph Triplet Extraction with Large Language Models (Papaluca et al., KaLLM-WS 2024)
PDF: https://preview.aclanthology.org/dois-2013-emnlp/2024.kallm-1.2.pdf