Yijiong Yu
2025
Accelerate Parallelizable Reasoning via Parallel Decoding within One Sequence
Yijiong Yu | Wei Wang | Ran Chen | Ji Pei
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Recent advances in reasoning models have demonstrated significant accuracy improvements from detailed and comprehensive reasoning processes. However, generating these lengthy reasoning sequences is computationally expensive and time-consuming. To address this inefficiency, we leverage the inherent parallelizability of certain tasks to accelerate reasoning. Specifically, when multiple parallel reasoning steps exist, we decode multiple tokens per forward pass via a tree-like attention mask within a single sequence, avoiding additional memory usage. Experimental results show that our method achieves up to nearly 100% decoding speedup while largely maintaining answer quality. Our code is available at https://github.com/yuyijiong/parallel-decoding-in-one-sequence
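The tree-like attention mask described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the function name, the mask layout, and the flat prefix-plus-branches packing are all choices made here for clarity. The key property is that each parallel branch attends to the shared prefix and to itself, but never to a sibling branch, so all branches can be decoded in the same forward pass of one sequence.

```python
import numpy as np

def tree_attention_mask(prefix_len, branch_lens):
    """Build a tree-like attention mask over a single packed sequence.

    Positions 0..prefix_len-1 hold the shared prompt; each branch of
    parallel reasoning is appended after it. A token may attend to the
    shared prefix and causally to earlier tokens of its OWN branch,
    but not to tokens of any sibling branch.
    """
    total = prefix_len + sum(branch_lens)
    mask = np.zeros((total, total), dtype=bool)
    # Ordinary causal attention inside the shared prefix.
    for i in range(prefix_len):
        mask[i, : i + 1] = True
    start = prefix_len
    for blen in branch_lens:
        for i in range(start, start + blen):
            mask[i, :prefix_len] = True       # every branch sees the prefix
            mask[i, start : i + 1] = True     # causal within this branch only
        start += blen
    return mask

# Prefix of 3 tokens, two parallel branches of 2 tokens each:
# branch 1 occupies positions 3-4, branch 2 positions 5-6.
mask = tree_attention_mask(prefix_len=3, branch_lens=[2, 2])
```

In this sketch, position 5 (start of branch 2) can attend to positions 0-2 but not to positions 3-4, which is what lets both branches advance one token each per forward pass.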
Mitigate Position Bias in LLMs via Scaling a Single Hidden States Channel
Yijiong Yu | Huiqiang Jiang | Xufang Luo | Qianhui Wu | Chin-Yew Lin | Dongsheng Li | Yuqing Yang | Yongfeng Huang | Lili Qiu
Findings of the Association for Computational Linguistics: ACL 2025
Long-context language models (LCLMs) can process long contexts but still exhibit position bias, also known as “lost in the middle”: placing key information in the middle of the context significantly degrades performance. To mitigate this, we first explore the micro-level manifestations of position bias, concluding that attention weights are a micro-level expression of it. We then identify that, in addition to position embeddings, positional information in hidden states also contributes to position bias, and that it manifests in specific channels of the hidden states, which we call positional hidden states. Based on these findings, we propose a method to mitigate position bias by scaling these positional hidden states. Experiments on NaturalQuestions multi-document QA, KV retrieval, and LongBench, using various models including RoPE models, context-window-extended models, and ALiBi models, demonstrate the effectiveness and generalizability of our approach. Our method improves performance by up to 15.2% on the “lost in the middle” benchmark by modifying just one channel of hidden states. Our code is available at https://aka.ms/PositionalHidden.
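The core intervention the abstract describes, scaling a single channel of a layer's hidden states, can be sketched as below. This is an illustrative NumPy fragment under stated assumptions: the function name and the channel index are placeholders, and in the paper the positional-hidden-state channel is identified empirically per model rather than chosen arbitrarily as it is here.

```python
import numpy as np

def scale_positional_channel(hidden_states, channel, factor):
    """Scale one channel of a layer's hidden states.

    hidden_states: (seq_len, hidden_dim) activations from one layer.
    channel: index of the presumed positional-hidden-state channel
             (arbitrary here; found empirically in practice).
    factor: scaling coefficient applied to that single channel.
    """
    out = hidden_states.copy()
    out[:, channel] *= factor
    return out

# Toy example: 8 token positions, hidden size 16, scale channel 3.
h = np.random.randn(8, 16)
h_scaled = scale_positional_channel(h, channel=3, factor=0.5)
```

All other channels pass through unchanged, which is why the method's footprint is so small: only one of the hidden-state dimensions is modified.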