Generating executable logical forms (LFs) with Large Language Models (LLMs) in a few-shot setting is becoming popular for Knowledge Graph Question Answering (KGQA). However, performance remains limited because LLMs see very little LF data during pre-training, which leads to many syntactically incorrect generations. If the LF generation task can be transformed into a task more familiar to the LLM, syntax errors can potentially be reduced and generation quality improved. At the same time, there exist specialized LLMs trained or fine-tuned on code in many programming languages; they can be leveraged to generate the LF through step-wise, constrained code-expression generation using the modular functions of the LF. Based on this insight, we propose CodeAlignKGQA, a framework that casts LF generation as code generation while incorporating LF-specific constraints. We extract question-specific subgraph information to enable Knowledge-Aware code generation, and we additionally introduce a dynamic self-code-correction mechanism that is applied only when required. Our extensive experiments on complex KGQA benchmarks such as KQA Pro demonstrate the effectiveness of our approach: CodeAlignKGQA surpasses all few-shot baselines on KQA Pro by 21%, achieving a new state-of-the-art.
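To make "LF generation as code generation with modular functions" concrete, the following minimal Python sketch renders a KoPL-style program as nested function calls over a toy graph. The operator names (Find, Relate, FilterConcept, Count) follow KoPL's vocabulary, but the toy knowledge graph, the sample question, and the function bodies are illustrative assumptions, not the paper's implementation.

# Minimal sketch (illustrative only): a KoPL-style logical form expressed as
# ordinary Python function calls, so a code-specialised LLM only has to emit
# constrained code expressions built from a fixed set of modular operators.
TOY_KG = {
    "borders": {"France": ["Spain", "Belgium", "Germany", "Italy"]},
    "concept": {"Spain": "country", "Belgium": "country",
                "Germany": "country", "Italy": "country"},
}

def Find(name):
    # Locate an entity by its surface name.
    return [name]

def Relate(entities, relation):
    # Follow a relation from each entity to its objects.
    return [obj for e in entities for obj in TOY_KG[relation].get(e, [])]

def FilterConcept(entities, concept):
    # Keep only entities belonging to the given concept/type.
    return [e for e in entities if TOY_KG["concept"].get(e) == concept]

def Count(entities):
    # Terminal operator returning the final answer.
    return len(entities)

# "Code-aligned" program for: "How many countries share a border with France?"
answer = Count(FilterConcept(Relate(Find("France"), "borders"), "country"))
print(answer)  # -> 4

Because every step is a call to one of a small set of known operators, the generator can be constrained step by step to emit only valid function names and argument types, which is the source of the syntax-error reduction the abstract describes.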
Language Models (LMs) are primarily evaluated on globally popular sports, often overlooking regional and indigenous sporting traditions. To address this gap, we introduce CultSportQA, a benchmark designed to assess LMs’ understanding of traditional sports across 60 countries and 6 continents, encompassing four distinct cultural categories. The dataset features 33,000 multiple-choice questions (MCQs) across text and image modalities, categorized primarily into three key types: history-based, rule-based, and scenario-based. To evaluate model performance, we employ zero-shot, few-shot, and chain-of-thought (CoT) prompting across a diverse set of Large Language Models (LLMs), Small Language Models (SLMs), and Multimodal Large Language Models (MLMs). By providing a comprehensive multilingual and multicultural sports benchmark, CultSportQA establishes a new standard for assessing AI’s ability to understand and reason about traditional sports. The dataset will be made publicly available, fostering research on culturally aware AI systems.
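As a small illustration of the evaluation protocol, the sketch below assembles zero-shot, few-shot, and CoT-style prompts for a traditional-sports MCQ. The sample question, answer options, and prompt wording are invented placeholders for illustration, not items or templates from CultSportQA.

# Hedged sketch of MCQ prompt construction; the question/options are made up.
def build_prompt(question, options, exemplars=None, cot=False):
    lines = []
    if exemplars:  # few-shot: prepend already-solved examples
        for ex_q, ex_opts, ex_ans in exemplars:
            lines.append(f"Question: {ex_q}")
            lines += [f"{k}. {v}" for k, v in ex_opts.items()]
            lines.append(f"Answer: {ex_ans}\n")
    lines.append(f"Question: {question}")
    lines += [f"{k}. {v}" for k, v in options.items()]
    if cot:  # chain-of-thought: ask for reasoning before the option letter
        lines.append("Think step by step, then give the option letter.")
    lines.append("Answer:")
    return "\n".join(lines)

print(build_prompt(
    "In which country did the sport of kabaddi originate?",
    {"A": "India", "B": "Brazil", "C": "Norway", "D": "Japan"},
    cot=True,
))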
Semantic Parsing of natural language questions into their executable logical form (LF) has shown state-of-the-art (SOTA) performance for Knowledge Graph Question Answering (KGQA). However, these methods are not applicable to real-world applications due to the lack of KG-specific training data. Recent advances in the capabilities of Large Language Models (LLMs) have led to generating low-level LFs such as SPARQL and S-Expression in a few-shot setting. Unfortunately, these methods (1) are limited by the underlying LLM's knowledge of the LF, (2) perform poorly on harder complex benchmarks such as KQA Pro, and (3) struggle to ground the generated LF to a specific Knowledge Graph. Recently, a new LF called KoPL has been introduced that explicitly models the complex reasoning process step by step in a symbolic manner and has shown SOTA performance on KQA Pro in the fully-supervised setting. Inspired by this, we propose SymKGQA, a framework that generates the step-by-step symbolic LF, i.e., KoPL, in a few-shot in-context learning setting using an LLM. Our framework does not depend on the LLM's pre-trained knowledge of KoPL. We further build a Retrieval-Augmented Generation based Question-Aware Contextual KoPL (QUACK) resolver to ground the generated LF. Our experiments with different LLMs and few-shot settings demonstrate that SymKGQA outperforms all other few-shot and even many fully-supervised KGQA approaches.
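The sketch below shows one way an LLM-generated KoPL step could be grounded to the KG vocabulary before execution, in the spirit of the grounding problem the QUACK resolver addresses. The string-similarity matching (difflib) and the tiny vocabularies are assumptions made for illustration; they are not the paper's actual resolver.

# Hedged sketch: map generated labels onto labels that actually exist in the KG.
import difflib

KG_RELATIONS = ["shares border with", "capital of", "member of"]
KG_ENTITIES = ["France", "Germany", "European Union"]

def ground(label, vocabulary, cutoff=0.6):
    # Return the closest KG label, or the raw label if nothing is close enough.
    match = difflib.get_close_matches(label, vocabulary, n=1, cutoff=cutoff)
    return match[0] if match else label

# A generated step such as Relate("borders with") may not exist verbatim in the KG:
step = {"function": "Relate", "inputs": ["borders with"]}
step["inputs"] = [ground(arg, KG_RELATIONS) for arg in step["inputs"]]
print(step)  # {'function': 'Relate', 'inputs': ['shares border with']}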