Abstract
While LLMs can provide reasoned explanations along with their answers, the nature and quality of those explanations are still poorly understood. In response, our goal is to define a detailed way of characterizing the explanation capabilities of modern models and to create a nuanced, interpretable explanation evaluation tool that can generate such characterizations automatically, without relying on expensive API calls or human annotations. Our approach is to (a) define the new task of explanation critiquing - identifying and categorizing any main flaw in an explanation and providing suggestions to address the flaw, (b) create a sizeable, human-verified dataset for this task, and (c) train an open-source, automatic critique model (called Digital Socrates) using this data. Through quantitative and qualitative analysis, we demonstrate how Digital Socrates is useful for revealing insights about student models by examining their reasoning chains, and how it can provide high-quality, nuanced, automatic evaluation of those model explanations for the first time. Digital Socrates thus fills an important gap in evaluation tools for understanding and improving the explanation behavior of models.
- Anthology ID: 2024.acl-long.302
- Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month: August
- Year: 2024
- Address: Bangkok, Thailand
- Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 5559–5586
- URL: https://aclanthology.org/2024.acl-long.302
- DOI: 10.18653/v1/2024.acl-long.302
- Cite (ACL): Yuling Gu, Oyvind Tafjord, and Peter Clark. 2024. Digital Socrates: Evaluating LLMs through Explanation Critiques. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5559–5586, Bangkok, Thailand. Association for Computational Linguistics.
- Cite (Informal): Digital Socrates: Evaluating LLMs through Explanation Critiques (Gu et al., ACL 2024)
- PDF: https://preview.aclanthology.org/autopr/2024.acl-long.302.pdf