Retrieval Enhanced Feedback via In-context Neural Error-book

Jongyeop Hyun, Bumsoo Kim


Abstract
Recent advancements in Large Language Models (LLMs) have significantly improved reasoning capabilities, with in-context learning (ICL) emerging as a key technique for adaptation without retraining. While previous works have focused on leveraging correct examples, recent research highlights the importance of learning from errors to enhance performance. However, existing methods lack a structured framework for analyzing and mitigating errors, particularly in Multimodal Large Language Models (MLLMs), where integrating visual and textual inputs adds complexity. To address this issue, we propose REFINE: Retrieval-Enhanced Feedback via In-context Neural Error-book, a teacher-student framework that systematically structures errors and provides targeted feedback. REFINE constructs structured feedback through three systematic queries (Feed-Target, Feed-Check, and Feed-Path), which enhance multimodal reasoning by prioritizing relevant visual information, diagnosing critical failure points, and formulating corrective actions. Unlike prior approaches that rely on redundant retrievals, REFINE optimizes structured feedback retrieval, improving inference efficiency, token usage, and scalability. Our results demonstrate substantial speedups, reduced computational costs, and successful generalization, highlighting REFINE’s potential for enhancing multimodal reasoning.
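The abstract describes the mechanism only at a high level; the sketch below is one plausible reading of it in Python, not the authors' implementation. The query wordings, the ErrorBook class, build_feedback, and the toy embed function are all assumptions introduced for illustration.

```python
import numpy as np

# Hypothetical wordings of the three structured queries named in the
# abstract; the paper's actual prompts are not reproduced here.
FEED_QUERIES = {
    "Feed-Target": "Which visual regions and facts matter most for this question?",
    "Feed-Check": "At which step does the student's reasoning fail, and why?",
    "Feed-Path": "What corrective steps lead from the failure point to the answer?",
}


def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding (a stand-in for a real sentence encoder)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.standard_normal(dim)
    return vec / np.linalg.norm(vec)


class ErrorBook:
    """Stores teacher-generated structured feedback, keyed by question embedding."""

    def __init__(self) -> None:
        self.keys = []     # unit-norm embedding vectors
        self.entries = []  # feedback dicts, aligned with self.keys

    def add(self, question: str, feedback: dict) -> None:
        self.keys.append(embed(question))
        self.entries.append(feedback)

    def retrieve(self, question: str):
        """Return the single most similar feedback entry (cosine similarity)."""
        if not self.keys:
            return None
        sims = np.stack(self.keys) @ embed(question)
        return self.entries[int(np.argmax(sims))]


def build_feedback(teacher, question: str, wrong_answer: str) -> dict:
    """Ask the teacher model each structured query about a student error."""
    return {
        name: teacher(f"{prompt}\nQ: {question}\nStudent answer: {wrong_answer}")
        for name, prompt in FEED_QUERIES.items()
    }


# Usage sketch with a stubbed teacher model.
teacher = lambda prompt: f"[teacher response to: {prompt[:40]}...]"
book = ErrorBook()
q_train = "How many red objects are left of the sphere?"
book.add(q_train, build_feedback(teacher, q_train, "3"))

# At inference, retrieve the nearest stored feedback and prepend it as
# in-context guidance for the student model.
fb = book.retrieve("How many red objects sit left of the cube?")
prefix = "\n".join(f"{name}: {text}" for name, text in fb.items())
print(prefix)
```

Retrieving a single best-matching entry (argmax rather than top-k) is one way to reflect the abstract's claim of avoiding redundant retrievals; the paper may use a different retrieval policy.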
Anthology ID:
2025.emnlp-main.711
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
14094–14109
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.711/
Cite (ACL):
Jongyeop Hyun and Bumsoo Kim. 2025. Retrieval Enhanced Feedback via In-context Neural Error-book. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 14094–14109, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Retrieval Enhanced Feedback via In-context Neural Error-book (Hyun & Kim, EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.711.pdf
Checklist:
 2025.emnlp-main.711.checklist.pdf