ERGO: Entropy-guided Resetting for Generation Optimization in Multi-turn Language Models
Haziq Mohammad Khalid, Athikash Jeyaganthan, Timothy Do, Yicheng Fu, Vasu Sharma, Sean O’Brien, Kevin Zhu
Abstract
Large Language Models (LLMs) suffer significant performance degradation in multi-turn conversations when information is presented incrementally. Given that multi-turn conversations characterize everyday interactions with LLMs, this degradation poses a severe challenge to real-world usability. We hypothesize that abrupt increases in model uncertainty signal misalignment in multi-turn LLM interactions, and we exploit this insight to dynamically realign conversational context. We introduce ERGO (Entropy-guided Resetting for Generation Optimization), which continuously quantifies internal uncertainty via Shannon entropy over next-token distributions and triggers adaptive prompt consolidation when a sharp spike in entropy is detected. By treating uncertainty as a first-class signal rather than a nuisance to eliminate, ERGO embraces variability in language and modeling, representing and responding to uncertainty. In multi-turn tasks with incrementally revealed instructions, ERGO yields a 56.6% average performance gain over standard baselines, increases aptitude (peak performance capability) by 24.7%, and decreases unreliability (variability in performance) by 35.3%, demonstrating that uncertainty-aware interventions can improve both accuracy and reliability in conversational AI.
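The abstract describes the core loop: measure Shannon entropy over the model's next-token distribution, watch for a sharp spike across turns, and consolidate the prompt when one occurs. Below is a minimal Python sketch of that loop under stated assumptions; the function names, the spike rule (running mean plus k standard deviations), and the consolidation step are illustrative placeholders, not the paper's actual implementation.

```python
import math
from typing import List

def shannon_entropy(probs: List[float]) -> float:
    """Shannon entropy (in nats) of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def is_entropy_spike(history: List[float], current: float,
                     k: float = 2.0, min_turns: int = 3) -> bool:
    """Flag a spike when the current turn's entropy exceeds the running mean
    of previous turns by k standard deviations. This threshold rule is an
    assumption for illustration; the paper's detection criterion may differ."""
    if len(history) < min_turns:
        return False
    mean = sum(history) / len(history)
    var = sum((h - mean) ** 2 for h in history) / len(history)
    return current > mean + k * math.sqrt(var)

def consolidate_prompt(turns: List[str]) -> str:
    """Hypothetical prompt consolidation: rewrite the incrementally revealed
    turns into a single self-contained instruction before re-querying."""
    return "All instructions so far, restated together:\n" + "\n".join(turns)

# Toy usage with per-turn mean entropies from a hypothetical model.
entropies = [1.1, 1.0, 1.2, 1.1]
new_turn_entropy = 2.9  # sharp jump in uncertainty on the latest turn
if is_entropy_spike(entropies, new_turn_entropy):
    prompt = consolidate_prompt(["turn 1 text", "turn 2 text", "turn 3 text"])
    # ...reset the context and re-query the model with the consolidated prompt.
```

In practice the per-turn entropy would be averaged over the tokens the model generates in that turn, and the consolidation step would be performed by the model itself rather than by simple concatenation.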
- Anthology ID:
- 2025.uncertainlp-main.23
- Volume:
- Proceedings of the 2nd Workshop on Uncertainty-Aware NLP (UncertaiNLP 2025)
- Month:
- November
- Year:
- 2025
- Address:
- Suzhou, China
- Venues:
- UncertaiNLP | WS
- Publisher:
- Association for Computational Linguistics
- Pages:
- 273–286
- URL:
- https://preview.aclanthology.org/ingest-emnlp/2025.uncertainlp-main.23/
- Cite (ACL):
- Haziq Mohammad Khalid, Athikash Jeyaganthan, Timothy Do, Yicheng Fu, Vasu Sharma, Sean O’Brien, and Kevin Zhu. 2025. ERGO: Entropy-guided Resetting for Generation Optimization in Multi-turn Language Models. In Proceedings of the 2nd Workshop on Uncertainty-Aware NLP (UncertaiNLP 2025), pages 273–286, Suzhou, China. Association for Computational Linguistics.
- Cite (Informal):
- ERGO: Entropy-guided Resetting for Generation Optimization in Multi-turn Language Models (Mohammad Khalid et al., UncertaiNLP 2025)
- PDF:
- https://preview.aclanthology.org/ingest-emnlp/2025.uncertainlp-main.23.pdf