TAG-EQA: Text-And-Graph for Event Question Answering via Structured Prompting Strategies

Maithili Sanjay Kadam, Francis Ferraro


Abstract
Large language models (LLMs) excel at general language tasks but often struggle with event-based questions—especially those requiring causal or temporal reasoning. We introduce TAG-EQA (Text-And-Graph for Event Question Answering), a prompting framework that injects causal event graphs into LLM inputs by converting structured relations into natural-language statements. TAG-EQA spans nine prompting configurations, combining three strategies (zero-shot, few-shot, chain-of-thought) with three input modalities (text-only, graph-only, text+graph), enabling a systematic analysis of when and how structured knowledge aids inference. On the TORQUESTRA benchmark, TAG-EQA improves accuracy by ~5% on average over text-only baselines, with gains up to ~12% in zero-shot settings and ~18% when graph-augmented CoT prompting is effective. While performance varies by model and configuration, our findings show that causal graphs can enhance event reasoning in LLMs without fine-tuning, offering a flexible way to encode structure in prompt-based QA.
Anthology ID:
2025.starsem-1.24
Volume:
Proceedings of the 14th Joint Conference on Lexical and Computational Semantics (*SEM 2025)
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Lea Frermann, Mark Stevenson
Venue:
*SEM
Publisher:
Association for Computational Linguistics
Pages:
304–315
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.starsem-1.24/
Cite (ACL):
Maithili Sanjay Kadam and Francis Ferraro. 2025. TAG-EQA: Text-And-Graph for Event Question Answering via Structured Prompting Strategies. In Proceedings of the 14th Joint Conference on Lexical and Computational Semantics (*SEM 2025), pages 304–315, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
TAG-EQA: Text-And-Graph for Event Question Answering via Structured Prompting Strategies (Kadam & Ferraro, *SEM 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.starsem-1.24.pdf