Qiwei Bi
2022
MINER: Multi-Interest Matching Network for News Recommendation
Jian Li | Jieming Zhu | Qiwei Bi | Guohao Cai | Lifeng Shang | Zhenhua Dong | Xin Jiang | Qun Liu
Findings of the Association for Computational Linguistics: ACL 2022
Personalized news recommendation is an essential technique for helping users find news they are interested in. Accurately matching users' interests with candidate news is the key to news recommendation. Most existing methods learn a single user embedding from the user's historical behaviors to represent the reading interest. However, user interest is usually diverse and may not be adequately modeled by a single user embedding. In this paper, we propose a poly attention scheme to learn multiple interest vectors for each user, which encode different aspects of user interest. We further propose a disagreement regularization to make the learned interest vectors more diverse. Moreover, we design a category-aware attention weighting strategy that incorporates news category information into the attention mechanism as explicit interest signals. Extensive experiments on the MIND news recommendation benchmark demonstrate that our approach significantly outperforms existing state-of-the-art methods.
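A minimal PyTorch-style sketch of the two ideas the abstract mentions: poly attention that pools a user's clicked-news embeddings into K interest vectors, and a disagreement regularizer that pushes those vectors apart. Class, function, and dimension names here are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PolyAttention(nn.Module):
    """Sketch of poly attention: K learnable context codes attend over the
    user's clicked-news embeddings to produce K interest vectors."""

    def __init__(self, hidden_dim: int, num_interests: int):
        super().__init__()
        # K learnable query codes, one per interest vector (illustrative init)
        self.codes = nn.Parameter(torch.randn(num_interests, hidden_dim) * 0.02)

    def forward(self, news_emb: torch.Tensor) -> torch.Tensor:
        # news_emb: (batch, history_len, hidden_dim)
        scores = torch.einsum("kd,bld->bkl", self.codes, news_emb)
        weights = F.softmax(scores, dim=-1)             # (batch, K, history_len)
        # K interest vectors per user: (batch, K, hidden_dim)
        return torch.einsum("bkl,bld->bkd", weights, news_emb)


def disagreement_regularization(interests: torch.Tensor) -> torch.Tensor:
    """One common way to instantiate a disagreement penalty: the mean pairwise
    cosine similarity between interest vectors (diagonal removed). The paper's
    exact formulation may differ."""
    normed = F.normalize(interests, dim=-1)             # (batch, K, hidden_dim)
    sim = torch.einsum("bkd,bjd->bkj", normed, normed)  # pairwise cosine sims
    k = interests.size(1)
    off_diag = sim - torch.eye(k, device=sim.device)    # zero out self-similarity
    return off_diag.mean()
```

In such a setup, the regularizer would be added to the recommendation loss with a small weight, so that minimizing it encourages the K vectors to cover different aspects of the user's reading history.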
MTRec: Multi-Task Learning over BERT for News Recommendation
Qiwei Bi | Jian Li | Lifeng Shang | Xin Jiang | Qun Liu | Hanfang Yang
Findings of the Association for Computational Linguistics: ACL 2022
Existing news recommendation methods usually learn news representations solely from news titles. To fully utilize other fields of news information, such as category and entities, some methods treat each field as an additional feature and combine the different feature vectors with attentive pooling. With the adoption of large pre-trained models like BERT in news recommendation, this way of incorporating multi-field information can run into a problem: the shallow feature encoding used to compress the category and entity information is not compatible with the deep BERT encoding. In this paper, we propose a multi-task method to incorporate the multi-field information into BERT, which improves its news encoding capability. In addition, we modify the gradients of the auxiliary tasks based on their conflicts with the gradient of the main task, which further boosts model performance. Extensive experiments on the MIND news recommendation benchmark show the effectiveness of our approach.
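The abstract's gradient modification step can be illustrated with a generic gradient-surgery rule in the spirit of PCGrad: when an auxiliary task's gradient points against the main task's gradient, drop the conflicting component. This is a hedged sketch under that assumption, not the paper's exact update rule; the function name is hypothetical.

```python
import torch


def deconflict_auxiliary_grad(main_grad: torch.Tensor,
                              aux_grad: torch.Tensor) -> torch.Tensor:
    """If the auxiliary-task gradient conflicts with the main-task gradient
    (negative dot product), project out the conflicting component before
    accumulating it. Both tensors are flattened parameter gradients."""
    dot = torch.dot(main_grad.flatten(), aux_grad.flatten())
    if dot < 0:
        # remove the component of aux_grad that points against main_grad
        aux_grad = aux_grad - dot / main_grad.norm().pow(2) * main_grad
    return aux_grad
```

Applied per parameter group at each training step, a rule like this lets the auxiliary category and entity objectives shape the BERT news encoder without degrading the main click-prediction objective.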
Co-authors
- Jian Li 2
- Lifeng Shang 2
- Xin Jiang 2
- Qun Liu 2
- Jieming Zhu 1