StructFact: Reasoning Factual Knowledge from Structured Data with Large Language Models
Sirui Huang, Yanggan Gu, Zhonghao Li, Xuming Hu, Li Qing, Guandong Xu
Abstract
Large language models (LLMs) have made significant strides in natural language processing by leveraging their ability to comprehend and reason with factual knowledge. However, a substantial amount of factual knowledge is stored in structured data, which has unique characteristics not typically encountered in the unstructured text used for pretraining LLMs. To evaluate the capability of LLMs in handling structurally stored facts, we introduce a benchmark called StructFact, which comprises meticulously annotated factual questions spanning five tasks that reflect the intrinsic properties of structured data. This benchmark aims to delineate the strengths and limitations of LLMs in reasoning over structured data for knowledge-intensive tasks in practical applications. Extensive experiments on 10 common LLMs have yielded several insights; one notable finding is that these models struggle significantly with the heterogeneity of structured data during reasoning.
- Anthology ID: 2025.findings-acl.391
- Volume: Findings of the Association for Computational Linguistics: ACL 2025
- Month: July
- Year: 2025
- Address: Vienna, Austria
- Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 7521–7552
- URL: https://preview.aclanthology.org/landing_page/2025.findings-acl.391/
- Cite (ACL): Sirui Huang, Yanggan Gu, Zhonghao Li, Xuming Hu, Li Qing, and Guandong Xu. 2025. StructFact: Reasoning Factual Knowledge from Structured Data with Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2025, pages 7521–7552, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal): StructFact: Reasoning Factual Knowledge from Structured Data with Large Language Models (Huang et al., Findings 2025)
- PDF: https://preview.aclanthology.org/landing_page/2025.findings-acl.391.pdf