SocREval: Large Language Models with the Socratic Method for Reference-free Reasoning Evaluation

Hangfeng He, Hongming Zhang, Dan Roth

Abstract
To comprehensively gauge the capacity of current models for complex reasoning, it is crucial to assess their step-by-step reasoning in a scalable manner. Established reference-based evaluation metrics rely on human-annotated reasoning chains as references to assess model-derived chains. However, such “gold-standard” human-written reasoning chains may not be unique, and their acquisition is often labor-intensive. Existing reference-free reasoning evaluation metrics, while eliminating the need for human-crafted reasoning chains as references, often require fine-tuning on human-derived chains before evaluation, which complicates the process and calls their adaptability to other datasets into question. To address these challenges, we harness GPT-4 to automatically evaluate reasoning-chain quality, thereby removing the dependency on human-written reasoning chains for both model fine-tuning and evaluation. Leveraging the Socratic method, we develop SocREval (Socratic Method-Inspired Reasoning Evaluation), a novel approach to prompt design for reference-free reasoning evaluation. Empirical results on four human-annotated datasets reveal that SocREval significantly improves GPT-4’s performance, surpassing existing reference-free and reference-based reasoning evaluation metrics. Beyond its demonstrated efficacy, SocREval proves to be both cost-efficient and robust to prompt writing and example selection, as substantiated by our in-depth analysis.
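To illustrate the kind of reference-free, Socratic-style evaluation the abstract describes, below is a minimal Python sketch that asks GPT-4 to probe and score a candidate reasoning chain without any human-written reference. The prompt wording, the 0-to-1 scoring scale, and the helper name `socratic_reasoning_score` are illustrative assumptions, not the paper's actual SocREval prompts; consult the paper's PDF for the exact prompt design.

```python
# Hedged sketch of reference-free, Socratic-style reasoning evaluation with
# GPT-4. Prompt text and scoring scale are assumptions, not the paper's
# actual SocREval prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def socratic_reasoning_score(question: str, reasoning_chain: str) -> str:
    """Hypothetical helper: have GPT-4 critique and score a model-derived
    reasoning chain with no human-annotated reference chain."""
    prompt = (
        "You are evaluating a step-by-step reasoning chain using the "
        "Socratic method. First, write your own reasoning for the "
        "question. Then critically probe each step of the candidate "
        "chain (definitions, assumptions, evidence). Finally, give an "
        "overall quality score between 0 and 1 with a brief "
        "justification.\n\n"
        f"Question: {question}\n\n"
        f"Candidate reasoning chain:\n{reasoning_chain}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output, suited to evaluation use
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(socratic_reasoning_score(
        "If a train travels 60 miles in 1.5 hours, what is its average speed?",
        "Step 1: speed = distance / time. Step 2: 60 / 1.5 = 40 mph.",
    ))
```

Note that no reference chain appears anywhere in the call: the evaluator model's own elicited reasoning, rather than a human-annotated gold chain, serves as the basis for judging each step, which is the core idea the abstract attributes to SocREval.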
Anthology ID:
2024.findings-naacl.175
Volume:
Findings of the Association for Computational Linguistics: NAACL 2024
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2736–2764
URL:
https://aclanthology.org/2024.findings-naacl.175
DOI:
10.18653/v1/2024.findings-naacl.175
Cite (ACL):
Hangfeng He, Hongming Zhang, and Dan Roth. 2024. SocREval: Large Language Models with the Socratic Method for Reference-free Reasoning Evaluation. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 2736–2764, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
SocREval: Large Language Models with the Socratic Method for Reference-free Reasoning Evaluation (He et al., Findings 2024)
PDF:
https://preview.aclanthology.org/landing_page/2024.findings-naacl.175.pdf