Entrospect: Information-Theoretic Self-Reflection Elicits Better Response Refinement of Small Language Models

Tianqiang Yan, Ziqiao Lin, Lin Zhang, Zhenglong Sun, Yuan Gao


Abstract
Self-reflection helps de-hallucinate Large Language Models (LLMs). However, its effectiveness remains insufficiently validated for Small Language Models (SLMs), which exhibit limited semantic capacity. In particular, we demonstrate that the conventional self-reflection paradigm, exemplified by Self-Refine, fails to deliver robust response refinement for models with 10 billion parameters or fewer, even when compared to generations elicited through Chain-of-Thought (CoT) prompting. To improve SLMs' self-reflection, we redesign Self-Refine and introduce Entrospect (ENTROpy-aware IntroSPECTion), an information-theoretic framework based on prompt engineering. We evaluated Entrospect on accuracy and average time consumption to assess both its precision and its computational efficiency. Experiments across four distinct SLMs and four baseline methods demonstrate that Entrospect achieves state-of-the-art performance on the validation tasks. Notably, under identical model and data settings, Entrospect improves reasoning accuracy by up to 36.2 while being up to 10 times more computationally efficient than its predecessor, Self-Refine.
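
The abstract does not spell out the Entrospect procedure (see the PDF for the authors' method). As a rough, hedged illustration of the entropy-aware idea only, the sketch below gates a self-reflection pass on the model's mean next-token entropy, so a confident draft skips the costly refinement step. The `model.generate` interface, the threshold value, and the critique prompt are illustrative assumptions, not the paper's implementation.

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of one next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def mean_response_entropy(step_distributions):
    """Average per-token entropy across a generated response."""
    return sum(token_entropy(p) for p in step_distributions) / len(step_distributions)

def refine_if_uncertain(model, prompt, threshold=1.5):
    """Hypothetical entropy-aware gate: trigger self-reflection only when
    the model's own uncertainty (mean token entropy) exceeds a threshold.
    `model.generate` is assumed to return the response text plus the
    per-step probability distributions; this API is illustrative."""
    response, dists = model.generate(prompt, return_distributions=True)
    if mean_response_entropy(dists) <= threshold:
        return response  # low entropy: model is confident, skip reflection
    # High entropy: ask the model to critique and revise its own draft.
    critique_prompt = (
        f"{prompt}\n\nDraft answer: {response}\n"
        "Identify any flaws in the draft and provide a revised answer."
    )
    revised, _ = model.generate(critique_prompt, return_distributions=True)
    return revised
```

Gating on the model's own uncertainty is what would let such a scheme avoid reflection passes on already-confident answers, which is consistent with the efficiency gains the abstract reports over always-on Self-Refine.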
Anthology ID: 2025.findings-acl.1261
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 24563–24577
URL: https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.1261/
DOI: 10.18653/v1/2025.findings-acl.1261
Cite (ACL): Tianqiang Yan, Ziqiao Lin, Lin Zhang, Zhenglong Sun, and Yuan Gao. 2025. Entrospect: Information-Theoretic Self-Reflection Elicits Better Response Refinement of Small Language Models. In Findings of the Association for Computational Linguistics: ACL 2025, pages 24563–24577, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Entrospect: Information-Theoretic Self-Reflection Elicits Better Response Refinement of Small Language Models (Yan et al., Findings 2025)
PDF: https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.1261.pdf