Improving Factuality with Explicit Working Memory
Mingda Chen | Yang Li | Karthik Padthe | Rulin Shao | Alicia Yi Sun | Luke Zettlemoyer | Gargi Ghosh | Wen-tau Yih
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025
Large language models can generate factually inaccurate content, a problem known as hallucination. Recent works have built upon retrieval-augmented generation (RAG) to improve factuality through iterative prompting, but these methods are limited by the traditional RAG design. To address these challenges, we introduce Ewe (Explicit Working Memory), a novel approach that enhances factuality in long-form text generation by integrating a working memory that receives real-time feedback from external resources. The memory is refreshed based on online fact-checking and retrieval feedback, allowing Ewe to rectify false claims during the generation process and ensure more accurate and reliable outputs. Our experiments demonstrate that Ewe outperforms strong baselines on four fact-seeking long-form generation datasets, increasing the factuality metric, VeriScore, by 2 to 6 points absolute without sacrificing the helpfulness of the responses. Further analysis reveals that the design of rules for memory updates, the configuration of memory units, and the quality of the retrieval datastore are crucial factors influencing model performance.
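The generate / fact-check / memory-refresh loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch only, not the paper's actual implementation: the function names, the set-membership "fact checker", and the claim-level granularity are all assumptions made for clarity.

```python
# Hypothetical sketch of a generation loop with an explicit working memory
# that is refreshed by fact-checking feedback. All names are illustrative
# assumptions, not the method described in the paper.

def fact_check(claim, datastore):
    # Stand-in verifier: a claim is "supported" if it appears verbatim
    # in the retrieval datastore.
    return claim in datastore

def generate_next(pending, memory):
    # Stand-in generator: prefer content placed in working memory by
    # earlier feedback, otherwise emit the next pending claim.
    return memory.pop() if memory else pending.pop(0)

def ewe_style_generation(claims, corrections, datastore):
    """Generate claims one at a time; when a claim fails the online
    fact check, refresh the working memory with a retrieved correction
    and let the next generation step use it."""
    memory, output = [], []
    pending = list(claims)
    while pending or memory:
        claim = generate_next(pending, memory)
        if fact_check(claim, datastore):
            output.append(claim)
        elif claim in corrections:
            # Refresh working memory with retrieval/fact-checking feedback.
            memory.append(corrections[claim])
        # Unsupported claims with no available correction are dropped.
    return output
```

For example, an unsupported claim with a known correction is replaced in the output by the corrected claim, while supported claims pass through unchanged.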