2025
Item-Language Model: Improving Large Language Model for Recommendation via Item-Language Representation Learning
Li Yang | Anushya Subbiah | Hardik Patel | Judith Yue Li | Yanwei Song | Reza Mirghaderi | Vikram Aggarwal | Fuli Feng | Zenglin Xu | Dongfang Liu | Qifan Wang
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
Large Language Models (LLMs) have recently made significant advancements in tackling complex tasks, such as retrieving hard-to-find information and solving intricate problems. Consequently, various approaches have been proposed to integrate LLMs into recommender systems, primarily by embedding them within existing architectures or training them on recommendation data. However, most existing methods fail to effectively incorporate user-item interaction signals into pretrained LLMs due to the modality gap between interaction data and the LLM's internal knowledge. To address this challenge, we propose the Item-Language Model (ILM) to enhance LLMs for recommendation. ILM consists of two main components: (1) an item-language representation learning module, in which an ILM encoder is pretrained to generate text-aligned item representations, and (2) an item-language co-training module, in which the pretrained ILM encoder is integrated into a pretrained LLM for recommendation tasks. Extensive experiments demonstrate the superior performance of our approach over several state-of-the-art methods, validating the importance of text-aligned item representations in bridging this modality gap. Our ablation studies further reveal the effectiveness of our model design for integrating interaction knowledge into LLMs for recommendation tasks. Our code is available at: https://anonymous.4open.science/r/ILM-7AD4/.
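The abstract describes pretraining an ILM encoder to produce text-aligned item representations before integrating it into an LLM. As a minimal illustrative sketch of what "text-aligned" representation learning can look like, the snippet below implements a CLIP-style contrastive (InfoNCE) objective between item and text embeddings in NumPy. The function name, batch setup, and temperature value are assumptions for illustration only, not the paper's actual implementation.

```python
import numpy as np

def info_nce_loss(item_emb, text_emb, temperature=0.07):
    """Contrastive alignment loss: matching item/text pairs sit on the
    diagonal of the similarity matrix and should dominate their row.
    (Illustrative sketch; not the ILM paper's implementation.)"""
    # L2-normalize both sets of embeddings so dot products are cosine similarities
    item = item_emb / np.linalg.norm(item_emb, axis=1, keepdims=True)
    text = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = item @ text.T / temperature
    # Numerically stable softmax over each row, then cross-entropy
    # against the diagonal (the matching text for each item)
    logits = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    n = len(item)
    return -np.log(probs[np.arange(n), np.arange(n)]).mean()
```

With this kind of objective, perfectly aligned item/text pairs yield a near-zero loss, while mismatched pairings are penalized, which is the sense in which the learned item representations become "text-aligned".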