FlashBack: Efficient Retrieval-Augmented Language Modeling for Fast Inference
Runheng Liu, Xingchen Xiao, Heyan Huang, Zewen Chi, Zhijing Wu
Abstract
Retrieval-Augmented Language Modeling (RALM), which integrates large language models (LLMs) with relevant documents retrieved from an external corpus, is a proven methodology for enabling LLMs to generate information beyond the scope of their pre-training corpus. Previous work that iteratively retrieves content and prepends it to the input incurs high runtime cost, degrading the inference efficiency of LLMs because the Key-Value (KV) cache cannot be reused efficiently. We propose FlashBack, a modular RALM designed to improve the inference efficiency of RALM with an appending-context pattern while maintaining decent performance after fine-tuning with Low-Rank Adaptation (LoRA). FlashBack appends retrieved documents at the end of the context to efficiently utilize the KV cache. We also introduce the Marking Token, two special prompt tokens that mark the appended context during fine-tuning. Our experiments show that FlashBack can improve language modeling performance in the perplexity metric, and that the Marking Token is a usable add-on when fine-tuning models on specific context patterns. By bypassing unnecessary re-computation, FlashBack achieves fast inference with long context input: its inference speed is up to 4× faster than the prepending counterpart on a 7B LLM (Llama 2) in the runtime test.
- Anthology ID:
- 2025.findings-acl.33
- Volume:
- Findings of the Association for Computational Linguistics: ACL 2025
- Month:
- July
- Year:
- 2025
- Address:
- Vienna, Austria
- Editors:
- Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 595–608
- URL:
- https://preview.aclanthology.org/landing_page/2025.findings-acl.33/
- Cite (ACL):
- Runheng Liu, Xingchen Xiao, Heyan Huang, Zewen Chi, and Zhijing Wu. 2025. FlashBack: Efficient Retrieval-Augmented Language Modeling for Fast Inference. In Findings of the Association for Computational Linguistics: ACL 2025, pages 595–608, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal):
- FlashBack: Efficient Retrieval-Augmented Language Modeling for Fast Inference (Liu et al., Findings 2025)
- PDF:
- https://preview.aclanthology.org/landing_page/2025.findings-acl.33.pdf
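The efficiency argument in the abstract — that prepending retrieved documents invalidates the KV cache while appending preserves it — can be illustrated with a toy model of prefix reuse. This is a minimal sketch, not the paper's implementation; the function names and token values are hypothetical, and it assumes only that a decoder-only LM's KV cache for a sequence is valid for the longest prefix shared with the previously cached sequence.

```python
# Toy illustration of KV-cache reuse under prepending vs. appending
# retrieved documents. All names and values here are hypothetical.

def shared_prefix_len(a, b):
    """Length of the longest common prefix of two token sequences."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def tokens_recomputed(prev_seq, new_seq):
    """Tokens whose key/value states must be recomputed for new_seq,
    given a KV cache built while processing prev_seq."""
    return len(new_seq) - shared_prefix_len(prev_seq, new_seq)

context = ["c1", "c2", "c3", "c4"]  # the user's input so far
doc_a = ["a1", "a2"]                # first retrieved document
doc_b = ["b1", "b2"]                # second retrieved document

# Prepending: the retrieved document comes before the context, so
# swapping documents changes the first tokens and invalidates the
# entire cache.
recompute_prepend = tokens_recomputed(doc_a + context, doc_b + context)

# Appending (FlashBack-style): the context stays at the front, so its
# cached key/value states remain valid across retrieval steps.
recompute_append = tokens_recomputed(context + doc_a, context + doc_b)

print(recompute_prepend)  # 6: every token is recomputed
print(recompute_append)   # 2: only the new document's tokens
```

As the context grows long relative to the retrieved documents, the gap between the two patterns widens, which is consistent with the reported speedup on long-context inputs.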