Tree-of-Quote Prompting Improves Factuality and Attribution in Multi-Hop and Medical Reasoning
Justin Xu | Yiming Li | Zizheng Zhang | Augustine Yui Hei Luk | Mayank Jobanputra | Samarth Oza | Ashley Murray | Meghana Reddy Kasula | Andrew Parker | David W Eyre
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) can produce fluent but factually incorrect outputs and often have limited ability to attribute their claims to source material. This undermines their reliability, particularly in multi-hop and high-stakes domains such as medicine. We propose Tree-of-Quote (ToQ) prompting, a structured framework that decomposes complex questions into subquestions, generates quotes to support each step without retrieval, and selectively advances reasoning based on quote quality. We also introduce FQ-Score, a unified metric that captures answer correctness, attribution fidelity, and reasoning quality. Experiments on StrategyQA, 2WikiMultiHopQA, MuSiQue, MoreHopQA, and MedQA demonstrate that ToQ improves factuality and attribution over standard prompting baselines. To validate FQ-Score as a proxy for human judgment, we conduct two reader studies with clinicians on medical questions, and observe strong correlations. Both clinician scores and FQ-Scores also indicate a preference for ToQ over baselines due to a combination of greater correctness, completeness, and logical flow. Our results suggest ToQ is a promising approach for building more trustworthy and auditable LLM systems.
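To make the abstract's description concrete, below is a minimal sketch of a Tree-of-Quote-style loop: decompose the question into subquestions, generate a supporting quote for each step without retrieval, and only advance reasoning on steps whose quotes score well. This is an illustrative simplification (linear rather than a full tree); the `llm` callable, prompts, function names, and the quality threshold are assumptions, not the paper's implementation.

```python
# Sketch of a Tree-of-Quote-style prompting loop, assuming a generic
# `llm(prompt) -> str` callable. Prompts, names, and the threshold are
# illustrative assumptions, not the authors' implementation.
from typing import Callable, List


def tree_of_quote(question: str, llm: Callable[[str], str],
                  quality_threshold: float = 0.5, max_steps: int = 5) -> str:
    """Decompose a question, quote-support each step, and advance selectively."""
    supported_steps: List[str] = []

    # 1. Decompose the complex question into subquestions.
    subquestions = llm(
        f"Decompose into subquestions, one per line:\n{question}"
    ).splitlines()

    for sub in subquestions[:max_steps]:
        # 2. Generate a supporting quote for this step (no external retrieval).
        quote = llm(f"Provide a verbatim quote that supports answering: {sub}")

        # 3. Score quote quality; only quote-supported steps advance.
        score_text = llm(
            "Rate from 0 to 1 how well this quote supports the subquestion.\n"
            f"Subquestion: {sub}\nQuote: {quote}\nScore:"
        )
        try:
            score = float(score_text.strip())
        except ValueError:
            score = 0.0
        if score >= quality_threshold:
            supported_steps.append(f"Subquestion: {sub}\nQuote: {quote}")

    # 4. Answer the original question using only the quote-supported steps.
    context = "\n\n".join(supported_steps)
    return llm(
        f"Using these quote-supported steps:\n{context}\n\nAnswer: {question}"
    )
```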