QRMeM: Unleash the Length Limitation through Question then Reflection Memory Mechanism
Bo Wang, Heyan Huang, Yixin Cao, Jiahao Ying, Wei Tang, Chong Feng
Abstract
While LLMs have made notable advancements in natural language processing, they continue to struggle with processing extensive text. Memory mechanisms offer a flexible solution for managing long contexts, utilizing techniques such as compression, summarization, and structuring to facilitate nuanced and efficient handling of large volumes of text. However, existing techniques face challenges with static knowledge integration, leading to insufficient adaptation to task-specific needs, and miss multi-segment relationships, which hinders the dynamic reorganization and logical combination of relevant segments during the response process. To address these issues, we introduce a novel strategy, Question then Reflection Memory Mechanism (QRMeM), which incorporates a dual-structured memory pool. This pool synergizes static textual content with structured graph guidance, fostering a reflective trial-and-error approach for navigating and identifying relevant segments. Our evaluation across multiple-choice question (MCQ) and multi-document question answering (Multi-doc QA) benchmarks showcases QRMeM’s enhanced performance compared to existing approaches.
- Anthology ID: 2024.findings-emnlp.278
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
- Month: November
- Year: 2024
- Address: Miami, Florida, USA
- Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 4837–4851
- URL: https://preview.aclanthology.org/add-emnlp-2024-awards/2024.findings-emnlp.278/
- DOI: 10.18653/v1/2024.findings-emnlp.278
- Cite (ACL): Bo Wang, Heyan Huang, Yixin Cao, Jiahao Ying, Wei Tang, and Chong Feng. 2024. QRMeM: Unleash the Length Limitation through Question then Reflection Memory Mechanism. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 4837–4851, Miami, Florida, USA. Association for Computational Linguistics.
- Cite (Informal): QRMeM: Unleash the Length Limitation through Question then Reflection Memory Mechanism (Wang et al., Findings 2024)
- PDF: https://preview.aclanthology.org/add-emnlp-2024-awards/2024.findings-emnlp.278.pdf