Ru Zhang


2025

GLoCIM: Global-view Long Chain Interest Modeling for news recommendation
Zhen Yang | Wenhui Wang | Tao Qi | Peng Zhang | TianYun Zhang | Ru Zhang | Jianyi Liu | Yongfeng Huang
Proceedings of the 31st International Conference on Computational Linguistics

Accurately recommending candidate news articles to users has always been the core challenge of news recommendation systems. News recommendation requires modeling user interests to match candidate news. Recent efforts have primarily focused on extracting local subgraph information from a global click graph constructed from the clicked-news sequences of all users. However, the computational complexity of extracting global click graph information has hindered the use of far-reaching linkages hidden between distant nodes in the global click graph, which could otherwise enable collaboration among similar users. To overcome this problem, we propose Global-view Long Chain Interest Modeling for news recommendation (GLoCIM), which combines neighbor interest with long chain interest distilled from the global click graph, leveraging collaboration among similar users to enhance news recommendation. We design a long chain selection algorithm and a long chain interest encoder to obtain global-view long chain interest from the global click graph, and a gated network that integrates long chain interest with neighbor interest to capture collaborative interest among similar users. Subsequently, we aggregate this collaborative interest with a local news category-enhanced representation to generate the final user representation, which is matched against the candidate news representation for recommendation. Experimental results on real-world datasets validate the effectiveness of our method in improving news recommendation performance.
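The gated integration of the two interest signals described in the abstract can be pictured with a small sketch. The following is an illustrative assumption, not the authors' released code: a learned sigmoid gate blends a long chain interest vector with a neighbor interest vector element-wise; the module and variable names (GatedInterestFusion, long_chain_interest, neighbor_interest) are hypothetical.

```python
import torch
import torch.nn as nn


class GatedInterestFusion(nn.Module):
    """Minimal sketch of a gated network that fuses two interest vectors."""

    def __init__(self, dim: int):
        super().__init__()
        # Gate is computed from the concatenation of both interest vectors.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, long_chain_interest: torch.Tensor,
                neighbor_interest: torch.Tensor) -> torch.Tensor:
        # g in (0, 1) weights the two interest sources element-wise.
        g = self.gate(torch.cat([long_chain_interest, neighbor_interest], dim=-1))
        return g * long_chain_interest + (1 - g) * neighbor_interest


# Example: fuse a batch of 32 users with 128-dimensional interest vectors.
fusion = GatedInterestFusion(dim=128)
collaborative_interest = fusion(torch.randn(32, 128), torch.randn(32, 128))
print(collaborative_interest.shape)  # torch.Size([32, 128])
```

In the paper's pipeline this fused vector would then be aggregated with the local category-enhanced representation to form the final user representation; the sketch only covers the gating step.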

Neuron Activation Modulation for Text Style Transfer: Guiding Large Language Models
Chaona Kong | Jianyi Liu | Yifan Tang | Ru Zhang
Findings of the Association for Computational Linguistics: ACL 2025

Text style transfer (TST) aims to flexibly adjust the style of a text while preserving its core content. Although large language models (LLMs) excel at TST tasks, they often suffer from a unidirectional transfer issue caused by imbalanced training data and their tendency to generate safer responses. These challenges present a significant obstacle to effective style transfer. To address this issue, we propose a novel method for text style transfer based on neuron activation modulation (NAM-TST). The approach identifies style-related neurons through gradient-based activation difference analysis and computes the activation differences between the source and target styles. During text generation, we use these activation differences to align the activation values of the style-related neurons with those of the target style, guiding the model to perform the transfer. This strategy enables the model to generate text that satisfies specific style requirements, effectively mitigating the unidirectional issue inherent in LLMs during style transfer. Experiments on benchmark datasets demonstrate that NAM-TST significantly improves style transfer quality while preserving content consistency.
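The modulation step described in the abstract can be sketched as a forward hook that shifts selected neuron activations toward the target style during generation. This is a minimal sketch under assumptions, not the paper's implementation: the layer path, the index tensor of style-related neurons (style_idx), the precomputed activation differences (delta), and the scaling factor alpha are all illustrative.

```python
import torch


def add_style_modulation(layer: torch.nn.Module,
                         style_idx: torch.Tensor,
                         delta: torch.Tensor,
                         alpha: float = 1.0):
    """Register a forward hook that nudges selected neuron activations
    toward the target style.

    style_idx : indices of style-related neurons (assumed precomputed).
    delta     : mean (target - source) activation difference per neuron
                (assumed precomputed from style corpora).
    alpha     : strength of the modulation.
    """
    def hook(module, inputs, output):
        output = output.clone()
        # Shift only the style-related neurons of the hidden activation.
        output[..., style_idx] += alpha * delta
        return output

    return layer.register_forward_hook(hook)


# Hypothetical usage with a Hugging Face causal LM (layer path is model-specific):
# handle = add_style_modulation(model.transformer.h[10].mlp, style_idx, delta, alpha=1.5)
# generated = model.generate(**inputs)
# handle.remove()  # restore unmodulated behavior afterwards
```

The hook-based design keeps the base model weights untouched, which matches the abstract's framing of guiding generation rather than fine-tuning; how the neurons and differences are actually selected in NAM-TST is detailed in the paper itself.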