Zhicheng Wang
2025
Decoding in Latent Spaces for Efficient Inference in LLM-based Recommendation
Chengbing Wang | Yang Zhang | Zhicheng Wang | Tianhao Shi | Keqin Bao | Fuli Feng | Tat-Seng Chua
Findings of the Association for Computational Linguistics: EMNLP 2025
Fine-tuning large language models (LLMs) for recommendation in a generative manner has delivered promising results, but incurs significant inference overhead due to autoregressive decoding in the language space. This work explores bypassing language-space decoding by directly matching candidate items with the LLM’s internal thought representations in the latent space, eliminating the time-consuming autoregressive process to reduce computational costs. Toward this end, we introduce Light Latent-space Decoding (L2D), an effective and efficient latent-space decoding method. L2D represents user-preferred items by using the hidden states of test sequences reflecting the LLM’s internal thought, and obtains candidate item representations from the hidden states of training sequences labeled with the corresponding candidate items. It then matches the two types of representations to decode items, achieving latent-space decoding. In this way, it enables efficient decoding without altering the LLM’s generative tuning paradigm, thereby preserving performance. Extensive empirical results demonstrate that L2D is more than 10x faster than language-space decoding while maintaining or enhancing performance.
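The core operation described in the abstract, matching a test sequence's hidden state against candidate item representations built from labeled training sequences, can be illustrated roughly as below. This is a minimal sketch only: last-token hidden states, mean pooling over training sequences, and cosine-similarity ranking are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of latent-space decoding as summarized above.
# Assumptions (not from the paper): sequences are represented by a single
# hidden-state vector, candidate items by the mean hidden state of training
# sequences labeled with that item, and matching uses cosine similarity.
import torch
import torch.nn.functional as F


def build_item_representations(train_hidden_states, train_item_ids, num_items, dim):
    """Average the hidden states of training sequences grouped by their labeled item."""
    item_reprs = torch.zeros(num_items, dim)
    counts = torch.zeros(num_items, 1)
    for h, item_id in zip(train_hidden_states, train_item_ids):
        item_reprs[item_id] += h
        counts[item_id] += 1
    return item_reprs / counts.clamp(min=1)


def latent_space_decode(test_hidden_state, item_reprs, top_k=10):
    """Rank candidate items by similarity to the test sequence's hidden state,
    skipping autoregressive generation in the language space entirely."""
    scores = F.cosine_similarity(test_hidden_state.unsqueeze(0), item_reprs, dim=-1)
    return scores.topk(top_k).indices  # indices of the top-k recommended items
```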
2023
Rehearsal-free Continual Language Learning via Efficient Parameter Isolation
Zhicheng Wang | Yufang Liu | Tao Ji | Xiaoling Wang | Yuanbin Wu | Congcong Jiang | Ye Chao | Zhencong Han | Ling Wang | Xu Shao | Wenqiu Zeng
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
We study the problem of overcoming catastrophic forgetting when learning a series of language processing tasks. Compared with previous methods, we emphasize the importance of not caching data from previous tasks, which makes the problem more challenging. Our proposed method applies the parameter isolation strategy. For each task, it allocates a small portion of private parameters and learns them with a shared pre-trained model. To load the correct parameters at test time, we introduce a simple yet effective non-parametric method. Experiments on continual language learning benchmarks show that our method significantly outperforms all existing methods that do not cache data, and is comparable to (or even better than) those that use historical data.
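A rough sketch of the parameter-isolation setup described above is given below, assuming the private parameters are small adapter modules over features from a frozen shared encoder and the non-parametric selector is a nearest-prototype lookup; these specifics are illustrative assumptions, not the paper's implementation.

```python
# Sketch of parameter isolation for continual learning.
# Assumptions (not from the paper): private parameters are bottleneck adapters,
# the shared pre-trained encoder stays frozen, and test-time parameter selection
# routes each input to the task with the nearest stored feature prototype.
import torch
import torch.nn as nn


class TaskAdapters(nn.Module):
    """Per-task private parameters on top of features from a frozen shared encoder."""

    def __init__(self, hidden_dim: int, bottleneck: int = 16):
        super().__init__()
        self.adapters = nn.ModuleDict()  # one small private adapter per task
        self.prototypes = {}             # mean training feature per task, for routing
        self.hidden_dim, self.bottleneck = hidden_dim, bottleneck

    def add_task(self, task_id: str):
        # Allocate a small portion of private parameters for the new task.
        self.adapters[task_id] = nn.Sequential(
            nn.Linear(self.hidden_dim, self.bottleneck),
            nn.ReLU(),
            nn.Linear(self.bottleneck, self.hidden_dim),
        )

    def forward(self, features: torch.Tensor, task_id: str) -> torch.Tensor:
        # Only this task's adapter is trained; data from earlier tasks is never revisited.
        return features + self.adapters[task_id](features)

    @torch.no_grad()
    def register_prototype(self, task_id: str, features: torch.Tensor):
        # Store the mean training feature of this task for later routing.
        self.prototypes[task_id] = features.mean(dim=0)

    @torch.no_grad()
    def select_task(self, features: torch.Tensor) -> str:
        # Non-parametric routing at test time: pick the nearest stored prototype.
        query = features.mean(dim=0)
        return min(self.prototypes, key=lambda t: torch.norm(query - self.prototypes[t]).item())
```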