Zhen Wen


2025

Rethinking Large Language Model Architectures for Sequential Recommendations
Hanbing Wang | Xiaorui Liu | Wenqi Fan | Xiangyu Zhao | Venkataramana Kini | Devendra Pratap Yadav | Fei Wang | Zhen Wen | Hui Liu
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics

Recently, sequential recommendation has increasingly been adapted to the LLM paradigm to harness the capabilities of LLMs. These methods typically formulate recommendation data as natural language and train the model to predict the next item auto-regressively. Despite their notable success, the significant computational burden at inference time poses a major challenge to practical deployment. In this study, we aim to streamline current LLM-based recommendation models and introduce a straightforward yet highly effective model, Lite-LLM4Rec. The primary objective of Lite-LLM4Rec is to ensure efficient inference for the sequential recommendation task. Lite-LLM4Rec circumvents step-by-step beam search decoding by employing a direct item projection head that produces ranking scores over all items in a single step. This design arises from our empirical finding that beam search decoding is ultimately unnecessary for sequential recommendation. Additionally, Lite-LLM4Rec introduces a hierarchical LLM structure crafted to handle the extensive contextual information of items efficiently and to avoid redundant computation, thus reducing computational overhead while retaining the power of LLMs. Experiments on four publicly available datasets validate the efficacy of Lite-LLM4Rec in enhancing both performance and inference efficiency (notably a 46.8% performance improvement and a 99.48% efficiency improvement on ML-1m) compared to existing LLM-based methods. Our implementation is available at: https://github.com/HanbingWang2001/Lite-LLM4Rec-PyTorch.
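The core efficiency idea in the abstract (scoring every item in one step rather than decoding item text token-by-token with beam search) can be sketched roughly as follows. This is a minimal illustration with made-up names and dimensions, not the authors' implementation: a projection head maps the LLM's final hidden state onto the item catalogue, yielding all ranking scores in a single matrix-vector product.

```python
import random

random.seed(0)
hidden_dim, num_items = 8, 50  # toy sizes for illustration only

# Last-position hidden state the LLM would produce for a user's interaction history.
hidden_state = [random.gauss(0, 1) for _ in range(hidden_dim)]

# Hypothetical item projection head: one learned weight row per catalogue item.
item_head = [[random.gauss(0, 1) for _ in range(hidden_dim)] for _ in range(num_items)]

def project(hidden, head):
    """Score every item in one matrix-vector product (no autoregressive decoding)."""
    return [sum(h * w for h, w in zip(hidden, row)) for row in head]

scores = project(hidden_state, item_head)                      # one score per item
top_k = sorted(range(num_items), key=lambda i: scores[i], reverse=True)[:10]
```

Compared with beam search over item-title tokens, which requires one forward pass per generated token per beam, this single projection is what enables the inference-efficiency gains the abstract reports.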

2015

Context-aware Entity Morph Decoding
Boliang Zhang | Hongzhao Huang | Xiaoman Pan | Sujian Li | Chin-Yew Lin | Heng Ji | Kevin Knight | Zhen Wen | Yizhou Sun | Jiawei Han | Bulent Yener
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

2014

Be Appropriate and Funny: Automatic Entity Morph Encoding
Boliang Zhang | Hongzhao Huang | Xiaoman Pan | Heng Ji | Kevin Knight | Zhen Wen | Yizhou Sun | Jiawei Han | Bulent Yener
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2013

Resolving Entity Morphs in Censored Data
Hongzhao Huang | Zhen Wen | Dian Yu | Heng Ji | Yizhou Sun | Jiawei Han | He Li
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)