Investigating Large Language Models for Text-to-SPARQL Generation

Jacopo D’Abramo, Andrea Zugarini, Paolo Torroni


Abstract
Large Language Models (LLMs) have demonstrated strong capabilities in code generation, such as translating natural language questions into SQL queries. However, state-of-the-art solutions often involve a costly fine-tuning step. In this study, we extensively evaluate In-Context Learning (ICL) solutions for text-to-SPARQL generation across different architectures and configurations, combining methods for retrieving relevant demonstrations for few-shot prompting with strategies for handling multiple generated hypotheses. We show that, without any fine-tuning, LLMs can formulate SPARQL queries that achieve state-of-the-art results on several Knowledge Graph Question Answering (KGQA) benchmark datasets.
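
The abstract names two ICL ingredients: retrieving demonstrations similar to the input question for few-shot prompting, and choosing among multiple generated hypotheses. As a rough Python illustration of these ideas only (the paper's actual pipeline, models, and selection criteria are not given on this page), the sketch below embeds questions with an assumed sentence-transformers encoder to pick the nearest demonstration pairs, and keeps the first sampled query that parses with rdflib; the encoder name and the demonstration pool are hypothetical.

    from sentence_transformers import SentenceTransformer, util
    from rdflib.plugins.sparql import prepareQuery

    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

    # Hypothetical (question, gold SPARQL) demonstration pool.
    PREFIXES = (
        "PREFIX wd: <http://www.wikidata.org/entity/> "
        "PREFIX wdt: <http://www.wikidata.org/prop/direct/> "
    )
    POOL = [
        ("Who directed Inception?",
         PREFIXES + "SELECT ?d WHERE { wd:Q25188 wdt:P57 ?d . }"),
        ("What is the capital of France?",
         PREFIXES + "SELECT ?c WHERE { wd:Q142 wdt:P36 ?c . }"),
    ]
    POOL_EMB = encoder.encode([q for q, _ in POOL], convert_to_tensor=True)

    def build_prompt(question: str, k: int = 2) -> str:
        """Format a few-shot prompt from the k most similar demonstrations."""
        q_emb = encoder.encode(question, convert_to_tensor=True)
        top = util.cos_sim(q_emb, POOL_EMB)[0].topk(min(k, len(POOL))).indices
        shots = "\n\n".join(
            f"Question: {POOL[i][0]}\nSPARQL: {POOL[i][1]}" for i in top.tolist()
        )
        return f"{shots}\n\nQuestion: {question}\nSPARQL:"

    def pick_hypothesis(candidates: list[str]) -> str:
        """Keep the first sampled query that parses as SPARQL; executing the
        candidates against the KG endpoint would be a stronger filter."""
        for cand in candidates:
            try:
                prepareQuery(cand)  # raises on malformed SPARQL
                return cand
            except Exception:
                continue
        return candidates[0]  # fall back to the top-ranked sample

In use, one would sample several completions of build_prompt(question) from an LLM at non-zero temperature and pass them to pick_hypothesis; again, this is a sketch of the general recipe, not the authors' method.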
Anthology ID: 2025.knowledgenlp-1.5
Volume: Proceedings of the 4th International Workshop on Knowledge-Augmented Methods for Natural Language Processing
Month: May
Year: 2025
Address: Albuquerque, New Mexico, USA
Editors: Weijia Shi, Wenhao Yu, Akari Asai, Meng Jiang, Greg Durrett, Hannaneh Hajishirzi, Luke Zettlemoyer
Venues: KnowledgeNLP | WS
Publisher: Association for Computational Linguistics
Pages: 66–80
URL: https://preview.aclanthology.org/fix-sig-urls/2025.knowledgenlp-1.5/
Cite (ACL): Jacopo D’Abramo, Andrea Zugarini, and Paolo Torroni. 2025. Investigating Large Language Models for Text-to-SPARQL Generation. In Proceedings of the 4th International Workshop on Knowledge-Augmented Methods for Natural Language Processing, pages 66–80, Albuquerque, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal): Investigating Large Language Models for Text-to-SPARQL Generation (D’Abramo et al., KnowledgeNLP 2025)
PDF: https://preview.aclanthology.org/fix-sig-urls/2025.knowledgenlp-1.5.pdf