Efficient Real-time Refinement of Language Model Text Generation

Joonho Ko, Jinheon Baek, Sung Ju Hwang


Abstract
Large language models (LLMs) have shown remarkable performance across a wide range of natural language tasks. However, a critical challenge remains: they sometimes generate factually incorrect answers. To address this, much prior work has focused on identifying errors in LLM generations and subsequently refining them; however, these methods are slow in deployment since they are designed to verify a response only after the entire generation (from the first to the last token) is complete. Further, we observe that once an LLM generates incorrect tokens early on, there is a higher likelihood that subsequent tokens will also be factually incorrect. To this end, in this work, we propose Streaming-VR (Streaming Verification and Refinement), a novel approach designed to enhance the efficiency of verification and refinement of LLM outputs. Specifically, the proposed Streaming-VR enables on-the-fly verification and correction of tokens as they are being generated, similar to a streaming process, ensuring that each subset of tokens is checked and refined in real time by another LLM while the LLM constructs its response. Through comprehensive evaluations on multiple datasets, we demonstrate that our approach not only enhances the factual accuracy of LLMs, but also offers a more efficient solution compared to prior refinement methods.
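The core idea of the abstract can be illustrated with a minimal sketch (hypothetical, not the authors' implementation): tokens are verified and refined in chunks as they stream out, so each corrected chunk becomes part of the prefix for later generation, rather than verifying only after the full response. The `verify` and `refine` callables stand in for the second LLM described in the paper.

```python
# Illustrative sketch of streaming verification and refinement.
# Assumptions: token_stream yields string tokens; verify/refine are
# stand-ins for a verifier/refiner LLM operating on token chunks.
from typing import Callable, Iterable, List


def streaming_vr(
    token_stream: Iterable[str],
    verify: Callable[[List[str]], bool],       # True if the chunk passes the check
    refine: Callable[[List[str]], List[str]],  # returns a corrected chunk
    chunk_size: int = 4,
) -> List[str]:
    """Check and refine each chunk on the fly; later chunks then build on
    an already-corrected prefix, limiting early-error propagation."""
    output: List[str] = []
    chunk: List[str] = []
    for tok in token_stream:
        chunk.append(tok)
        if len(chunk) == chunk_size:
            output.extend(chunk if verify(chunk) else refine(chunk))
            chunk = []
    if chunk:  # flush the final partial chunk
        output.extend(chunk if verify(chunk) else refine(chunk))
    return output


# Toy example: flag chunks containing the misspelled token "Pariss" and fix it.
stream = ["The", "capital", "is", "Pariss", "and", "it", "is", "large"]
fixed = streaming_vr(
    stream,
    verify=lambda c: "Pariss" not in c,
    refine=lambda c: ["Paris" if t == "Pariss" else t for t in c],
)
print(" ".join(fixed))  # → "The capital is Paris and it is large"
```

In contrast, a post-hoc refiner would only inspect the sequence after all eight tokens were emitted; here the first four-token chunk is corrected before the second chunk is processed.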
Anthology ID:
2025.emnlp-main.1753
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
34548–34561
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1753/
Cite (ACL):
Joonho Ko, Jinheon Baek, and Sung Ju Hwang. 2025. Efficient Real-time Refinement of Language Model Text Generation. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 34548–34561, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Efficient Real-time Refinement of Language Model Text Generation (Ko et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1753.pdf
Checklist:
2025.emnlp-main.1753.checklist.pdf