No Need for Large-Scale Search: Exploring Large Language Models in Complex Knowledge Base Question Answering
Shouhui Wang | Biao Qin
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Knowledge Base Question Answering (KBQA) systems play a pivotal role in natural language processing and information retrieval. Their primary objective is to bridge the gap between natural language questions and structured knowledge representations, especially for complex KBQA. Despite significant progress in developing effective and interconnected KBQA technologies, the recent emergence of large language models (LLMs) offers an opportunity to address the challenges faced by KBQA systems more efficiently. This study adopts LLMs, such as Large Language Model Meta AI (LLaMA), as a channel connecting natural language questions with structured knowledge representations and proposes a Three-step Fine-tune Strategy based on large language models to implement the KBQA system (TFS-KBQA). This method converts natural language questions directly into structured knowledge representations, thereby overcoming limitations of existing KBQA methods, such as large search and reasoning spaces and the ranking of massive candidate sets. To evaluate the effectiveness of the proposed method, we conduct experiments on three popular complex KBQA datasets. TFS-KBQA achieves state-of-the-art performance across all three datasets, with particularly notable results on the WebQuestionSP dataset, where it reaches an F1 value of 79.9%.
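The abstract does not include code; the sketch below only illustrates the general idea it describes, namely fine-tuning a LLaMA-style model to translate questions directly into structured query forms, and is not the authors' TFS-KBQA implementation. The checkpoint name, prompt template, LoRA hyperparameters, and the toy training pair are all assumptions introduced for this example.

```python
# Minimal sketch (not the authors' code): fine-tune a causal LM so that it maps
# a natural-language question straight to a structured representation
# (e.g., an s-expression or SPARQL), avoiding large candidate search spaces.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model
from datasets import Dataset

model_name = "meta-llama/Llama-2-7b-hf"  # assumption: any causal LM checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA keeps fine-tuning lightweight; hyperparameters below are placeholders.
model = get_peft_model(
    model, LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])
)

# Toy question -> logical-form pair, purely illustrative.
pairs = [
    {"question": "Who directed Inception?",
     "logical_form": "(JOIN (R film.director) m.inception)"},
]

def to_features(example):
    # Concatenate question and target form into one causal-LM training sequence.
    text = (f"Question: {example['question']}\n"
            f"LogicalForm: {example['logical_form']}{tokenizer.eos_token}")
    enc = tokenizer(text, truncation=True, max_length=256, padding="max_length")
    enc["labels"] = enc["input_ids"].copy()
    return enc

train_ds = Dataset.from_list(pairs).map(
    to_features, remove_columns=["question", "logical_form"]
)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="tfs-kbqa-sketch",
                           per_device_train_batch_size=1,
                           num_train_epochs=1,
                           learning_rate=2e-4),
    train_dataset=train_ds,
).train()
```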