Hanjia Lyu
2024
LLM-Rec: Personalized Recommendation via Prompting Large Language Models
Hanjia Lyu | Song Jiang | Hanqing Zeng | Yinglong Xia | Qifan Wang | Si Zhang | Ren Chen | Chris Leung | Jiajie Tang | Jiebo Luo
Findings of the Association for Computational Linguistics: NAACL 2024
Text-based recommendation holds a wide range of practical applications due to its versatility, as textual descriptions can represent nearly any type of item. However, directly employing the original item descriptions may not yield optimal recommendation performance because they lack comprehensive information to align with user preferences. Recent advances in large language models (LLMs) have showcased their remarkable ability to harness commonsense knowledge and reasoning. In this study, we introduce a novel approach, coined LLM-Rec, which incorporates four distinct prompting strategies of text enrichment for improving personalized text-based recommendations. Our empirical experiments reveal that using LLM-augmented text significantly enhances recommendation quality. Even basic MLP (Multi-Layer Perceptron) models achieve results comparable to, or even better than, those of complex content-based methods. Notably, the success of LLM-Rec lies in its prompting strategies, which effectively tap into the language model’s comprehension of both general and specific item characteristics. This highlights the importance of employing diverse prompts and input augmentation techniques to boost the recommendation effectiveness of LLMs.
SoMeLVLM: A Large Vision Language Model for Social Media Processing
Xinnong Zhang | Haoyu Kuang | Xinyi Mou | Hanjia Lyu | Kun Wu | Siming Chen | Jiebo Luo | Xuanjing Huang | Zhongyu Wei
Findings of the Association for Computational Linguistics: ACL 2024
The growth of social media, characterized by its multimodal nature, has led to the emergence of diverse phenomena and challenges, which calls for an effective approach to uniformly solve automated tasks. Powerful Large Vision Language Models make it possible to handle a variety of tasks simultaneously, but even with carefully designed prompting methods, general-domain models often fall short in aligning with the unique speaking style and context of social media tasks. In this paper, we introduce a Large Vision Language Model for Social Media Processing (SoMeLVLM), which is a cognitive framework equipped with five key capabilities: knowledge & comprehension, application, analysis, evaluation, and creation. SoMeLVLM is designed to understand and generate realistic social media behavior. We have developed a 654k multimodal social media instruction-tuning dataset to support our cognitive framework and fine-tune our model. Our experiments demonstrate that SoMeLVLM achieves state-of-the-art performance in multiple social media tasks. Further analysis shows its significant advantages over baselines in terms of cognitive abilities.
Co-authors
- Jiebo Luo 2
- Song Jiang 1
- Hanqing Zeng 1
- Yinglong Xia 1
- Qifan Wang 1