2025
ZigZagKV: Dynamic KV Cache Compression for Long-context Modeling based on Layer Uncertainty
Meizhi Zhong | Xikai Liu | Chen Zhang | Yikun Lei | Yan Gao | Yao Hu | Kehai Chen | Min Zhang
Proceedings of the 31st International Conference on Computational Linguistics
Large language models (LLMs) have become a research hotspot. To accelerate LLM inference, storing computed KV caches in memory has become the standard technique. However, as the inference length increases, growing KV caches can lead to out-of-memory issues. Many existing methods address this issue through KV cache compression, primarily by preserving key tokens across all layers to reduce information loss, and most of them allocate a uniform budget size to each layer. However, we observe that the minimum budget size needed to retain essential information varies across layers and models, from the perspectives of both attention and hidden-state output. Building on this observation, this paper proposes a simple yet effective KV cache compression method that leverages layer uncertainty to allocate a budget size to each layer. Experimental results show that the proposed method can reduce memory usage of the KV caches to only ~20% of full KV inference while achieving nearly lossless performance.
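To illustrate the kind of uncertainty-based budget allocation the abstract describes, here is a minimal sketch; it is not the paper's actual algorithm, and the proportional-allocation rule, per-layer floor, and function names are assumptions for illustration only.

    import numpy as np

    def allocate_layer_budgets(layer_uncertainty, total_budget, min_budget=16):
        # Split a total KV-cache token budget across layers in proportion to a
        # per-layer uncertainty score, keeping a small floor for every layer.
        scores = np.asarray(layer_uncertainty, dtype=float)
        weights = scores / scores.sum()
        return np.maximum(min_budget, np.floor(weights * total_budget)).astype(int)

    # Hypothetical example: 4 layers, keep roughly 20% of a 4096-token context overall.
    print(allocate_layer_budgets([0.9, 0.4, 0.2, 0.1], total_budget=int(0.2 * 4096 * 4)))

Layers with higher uncertainty keep more tokens, while more stable layers can be compressed further, which is the intuition behind non-uniform per-layer budgets.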
Understanding the RoPE Extensions of Long-Context LLMs: An Attention Perspective
Meizhi Zhong | Chen Zhang | Yikun Lei | Xikai Liu | Yan Gao | Yao Hu | Kehai Chen | Min Zhang
Proceedings of the 31st International Conference on Computational Linguistics
Enabling LLMs to handle lengthy contexts is currently a research hotspot. Most LLMs are built upon rotary position embedding (RoPE), a popular position-encoding method, so a prominent path is to extrapolate RoPE trained on comparably short texts to far longer texts. A substantial body of work has been dedicated to boosting extrapolation by extending the formulation of RoPE; however, few of these efforts have attempted to explain their inner workings comprehensively. In this paper, we offer a straightforward yet in-depth understanding of RoPE extensions from an attention perspective on two benchmarking tasks. A broad array of experiments reveals several valuable findings: 1) maintaining attention patterns close to those at the pretrained length improves extrapolation; 2) large attention uncertainty leads to retrieval errors; 3) using longer continual pretraining lengths for RoPE extensions can reduce attention uncertainty and significantly enhance extrapolation.
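As a hedged illustration of the quantities discussed above, the sketch below computes standard RoPE inverse frequencies with an optional base-scaling factor (one common family of RoPE extensions) and uses attention-row entropy as one simple proxy for "attention uncertainty"; the proxy choice and all names are assumptions, not the paper's exact measurements.

    import torch

    def rope_inverse_frequencies(head_dim, base=10000.0, scale=1.0):
        # Standard RoPE inverse frequencies; enlarging base * scale stretches the
        # rotary period, which is how base-scaling extensions cover longer contexts.
        idx = torch.arange(0, head_dim, 2, dtype=torch.float32)
        return 1.0 / (base * scale) ** (idx / head_dim)

    def attention_entropy(scores):
        # Entropy of each softmax attention row: a simple proxy for attention uncertainty.
        probs = torch.softmax(scores, dim=-1)
        return -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)

    scores = torch.randn(2, 8, 128)  # (batch, queries, keys), random demo values
    print(rope_inverse_frequencies(64, scale=4.0)[:4])
    print(attention_entropy(scores).mean().item())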
DecEx-RAG: Boosting Agentic Retrieval-Augmented Generation with Decision and Execution Optimization via Process Supervision
Yongqi Leng | Yikun Lei | Xikai Liu | Meizhi Zhong | Bojian Xiong | Yurong Zhang | Yan Gao | Yiwu | Yao Hu | Deyi Xiong
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
Agentic Retrieval-Augmented Generation (Agentic RAG) enhances the processing capability for complex tasks through dynamic retrieval and adaptive workflows. Recent advances (e.g., Search-R1) have shown that outcome-supervised reinforcement learning demonstrates strong performance. However, this approach still suffers from inefficient exploration, sparse reward signals, and ambiguous global reward feedback. To address these challenges, we propose DecEx-RAG, which models RAG as a Markov Decision Process (MDP) incorporating decision-making and execution, while introducing an efficient pruning strategy to optimize data expansion. Through comprehensive process-level policy optimization, DecEx-RAG significantly enhances the autonomous task decomposition, dynamic retrieval, and high-quality answer generation capabilities of large language models (LLMs). Experiments show that DecEx-RAG achieves an average absolute performance improvement of 6.2% across six datasets, significantly outperforming existing baselines. Moreover, the pruning strategy improves data construction efficiency by nearly 6×, providing an efficient solution for process-supervised RAG training. The code is available at https://github.com/sdsxdxl/DecEx-RAG.
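The skeleton below sketches the decision/execution split of an MDP-style agentic RAG loop as described above; the action set, stopping rule, and all function names are illustrative assumptions, and the pruning strategy and process-level rewards are omitted.

    from dataclasses import dataclass, field

    @dataclass
    class RAGState:
        question: str
        evidence: list = field(default_factory=list)
        done: bool = False

    def decide(state):
        # Decision step: pick the next action given the current state.
        return "answer" if len(state.evidence) >= 2 else "retrieve"

    def execute(state, action, search, generate):
        # Execution step: carry out the chosen action and update the state.
        if action == "retrieve":
            state.evidence.append(search(state.question))
            return None
        state.done = True
        return generate(state.question, state.evidence)

    def run_episode(question, search, generate, max_steps=5):
        state = RAGState(question)
        for _ in range(max_steps):
            answer = execute(state, decide(state), search, generate)
            if state.done:
                return answer

    # Dummy search/generate callables just to show the control flow.
    print(run_episode("Who wrote Hamlet?",
                      search=lambda q: f"retrieved snippet for: {q}",
                      generate=lambda q, ev: f"answer grounded in {len(ev)} snippets"))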
Think-Search-Patch: A Retrieval-Augmented Reasoning Framework for Repository-Level Code Repair
Bojian Xiong | Yikun Lei | Xikai Liu | Shaowei Zhang | Pengyun Zhu | Yan Liu | Yongqi Leng | Ling Shi | Meizhi Zhong | Yurong Zhang | Yan Gao | Yiwu | Yao Hu | Deyi Xiong
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
Large language models often struggle in multi-file coding scenarios with strong inter-file dependencies, as typified by SWE-bench. To mitigate this issue, we propose Think-Search-Patch (TSP), a retrieval-augmented reasoning framework for repository-level code repair. At the Think stage, our system breaks down a coding task and creates clear search queries. Next, at the Search stage, it retrieves relevant code snippets using models such as E5. At the final Patch stage, it generates standardized patches based on the key snippets. In addition to the proposed framework, we enhance system reliability through a two-stage training process. In the first stage, the system undergoes supervised fine-tuning (SFT) on our TSP dataset. In the subsequent stage, we employ rejection sampling with correction to generate preference pairs for Direct Preference Optimization (DPO) training, thereby reducing errors in the intermediate phases. Experimental results demonstrate that the TSP framework enhances retrieval accuracy and repair success on SWE-bench Lite, even surpassing larger models in managing extensive code contexts and successfully addressing bugs spanning multiple files. All data and code are available at https://github.com/Gengar0215/TSP-framework.
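Here is a minimal sketch of the three-stage Think-Search-Patch flow described above, assuming generic llm and embed callables (in practice a dense retriever such as E5 would supply the embeddings); the prompts and function names are illustrative, not the framework's actual implementation.

    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def think(llm, issue):
        # Think: turn the issue description into a concise code-search query.
        return llm(f"Rewrite this issue as a short code search query:\n{issue}")

    def search(embed, query, snippets, top_k=3):
        # Search: rank repository snippets by embedding similarity to the query.
        q = embed(query)
        return sorted(snippets, key=lambda s: -cosine(q, embed(s)))[:top_k]

    def patch(llm, issue, snippets):
        # Patch: draft a standardized fix grounded in the retrieved snippets.
        context = "\n\n".join(snippets)
        return llm(f"Issue:\n{issue}\n\nRelevant code:\n{context}\n\nWrite a unified diff:")

    def think_search_patch(llm, embed, issue, repo_snippets):
        return patch(llm, issue, search(embed, think(llm, issue), repo_snippets))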
2023
2INER: Instructive and In-Context Learning on Few-Shot Named Entity Recognition
Jiasheng Zhang | Xikai Liu | Xinyi Lai | Yan Gao | Shusen Wang | Yao Hu | Yiqing Lin
Findings of the Association for Computational Linguistics: EMNLP 2023
Prompt-based learning has emerged as a powerful technique in natural language processing (NLP) due to its ability to leverage pre-training knowledge for downstream few-shot tasks. In this paper, we propose 2INER, a novel text-to-text framework for Few-Shot Named Entity Recognition (NER) tasks. Our approach employs instruction finetuning based on InstructionNER to enable the model to effectively comprehend and process task-specific instructions, including both main and auxiliary tasks. We also introduce a new auxiliary task, called Type Extracting, to enhance the model’s understanding of entity types in the overall semantic context of a sentence. To facilitate in-context learning, we concatenate examples to the input, enabling the model to learn from additional contextual information. Experimental results on four datasets demonstrate that our approach outperforms existing Few-Shot NER methods and remains competitive with state-of-the-art standard NER algorithms.
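To make the text-to-text setup concrete, the sketch below concatenates a task instruction and a few labeled demonstrations to the input sentence, as the abstract describes; the prompt wording and label format are assumptions for illustration only.

    def build_ner_prompt(instruction, demonstrations, sentence):
        # Concatenate the instruction, in-context examples, and the target sentence
        # into one text-to-text input for the model.
        demo_text = "\n".join(f"Sentence: {s}\nEntities: {e}" for s, e in demonstrations)
        return f"{instruction}\n{demo_text}\nSentence: {sentence}\nEntities:"

    demos = [("Yao Ming was born in Shanghai.", "Yao Ming (person); Shanghai (location)")]
    print(build_ner_prompt(
        "Extract all named entities and their types from the sentence.",
        demos,
        "Alibaba is headquartered in Hangzhou."))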
2019
Telling the Whole Story: A Manually Annotated Chinese Dataset for the Analysis of Humor in Jokes
Dongyu Zhang | Heting Zhang | Xikai Liu | Hongfei Lin | Feng Xia
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Humor plays an important role in human communication, which makes it an important problem for natural language processing. Prior work on the analysis of humor focuses on whether text is humorous or not, or on the degree of funniness, but this is insufficient to explain why it is funny. We therefore create a dataset on humor with 9,123 manually annotated jokes in Chinese. We propose a novel annotation scheme that captures how humor arises in text. Specifically, our annotations of linguistic humor not only contain the degree of funniness, as in previous work, but also the key words that trigger humor, as well as character relationship, scene, and humor categories. We report reasonable agreement between annotators. We also conduct an analysis and exploration of the dataset. To the best of our knowledge, we are the first to approach humor annotation with the goal of exploring the underlying mechanism of the use of humor, which may contribute to a significantly deeper analysis of humor. We also contribute a scarce and valuable dataset, which we will release publicly.
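A hypothetical record shaped like the annotation scheme described above might look as follows; the field names and values are illustrative and do not reflect the dataset's actual schema.

    annotation = {
        "joke_id": 1,
        "text": "An annotated Chinese joke would appear here.",
        "funniness": 3,                       # degree of funniness on an ordinal scale
        "trigger_words": ["punchline word"],  # key words that trigger the humor
        "character_relationship": "husband-wife",
        "scene": "at home",
        "humor_category": "homophonic pun",
    }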
Transformer-Based Capsule Network For Stock Movement Prediction
Jintao Liu | Hongfei Lin | Xikai Liu | Bo Xu | Yuqi Ren | Yufeng Diao | Liang Yang
Proceedings of the First Workshop on Financial Technology and Natural Language Processing