Wenya Wu


2025

Intent Contrastive Learning Based on Multi-view Augmentation for Sequential Recommendation
Bo Pei | Yingzheng Zhu | Guangjin Wang | Huajuan Duan | Wenya Wu | Fuyong Xu | Yizhao Zhu | Peiyu Liu | Ran Lu
Proceedings of the 31st International Conference on Computational Linguistics

Sequential recommendation systems play a key role in modern information retrieval. However, existing intent-related work fails to adequately capture long-term dependencies in user behavior, i.e., the influence of early user behavior on current behavior, and also fails to effectively utilize item relevance. To this end, we propose ICMA, a novel sequential recommendation framework that overcomes these limitations. Specifically, we combine temporal variability with a position encoding that has extrapolation properties to encode sequences, thereby expanding the model's view of user behavior and capturing long-term user dependencies more effectively. Additionally, we design a multi-view data augmentation method: building on random augmentation operations (e.g., crop, mask, and reorder), we further introduce insertion and substitution operations that exploit item relevance to augment the sequence data from different views. Within this framework, clustering is performed to learn intent distributions, and the learned intents are integrated into the sequential recommendation model via contrastive SSL, which maximizes consistency between sequence views and their corresponding intents. Training alternates between an Expectation (E) step and a Maximization (M) step. Experiments on three real-world datasets show that our approach outperforms most baselines by 0.8% to 14.7%.
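The five augmentation operations named in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the ICMA implementation: the ratios are placeholders, and related_item is a hypothetical helper standing in for whatever item-relevance lookup the paper uses for its insertion and substitution operators.

import random

MASK_TOKEN = 0  # assumed id reserved for masked positions

def crop(seq, ratio=0.6):
    """Keep a random contiguous sub-sequence."""
    n = max(1, int(len(seq) * ratio))
    start = random.randint(0, len(seq) - n)
    return seq[start:start + n]

def mask(seq, ratio=0.3):
    """Replace a random subset of items with a mask token."""
    idx = set(random.sample(range(len(seq)), int(len(seq) * ratio)))
    return [MASK_TOKEN if i in idx else x for i, x in enumerate(seq)]

def reorder(seq, ratio=0.3):
    """Shuffle one random contiguous segment of the sequence."""
    n = max(1, int(len(seq) * ratio))
    start = random.randint(0, len(seq) - n)
    seg = seq[start:start + n]
    random.shuffle(seg)
    return seq[:start] + seg + seq[start + n:]

def insert(seq, related_item, ratio=0.3):
    """Insert items correlated with existing ones (relevance-aware view)."""
    out = list(seq)
    # insert from the back so earlier indices stay valid
    for i in sorted(random.sample(range(len(seq)), int(len(seq) * ratio)), reverse=True):
        out.insert(i, related_item(seq[i]))
    return out

def substitute(seq, related_item, ratio=0.3):
    """Swap random items for correlated ones (relevance-aware view)."""
    idx = set(random.sample(range(len(seq)), int(len(seq) * ratio)))
    return [related_item(x) if i in idx else x for i, x in enumerate(seq)]

In a contrastive setup of this kind, two augmented views of the same user sequence would typically be treated as a positive pair, with the intent assigned by clustering serving as an additional positive signal.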

What Counts Underlying LLMs’ Moral Dilemma Judgments?
Wenya Wu | Weihong Deng
Proceedings of the Fourth Workshop on NLP for Positive Impact (NLP4PI)

Moral judgments in LLMs increasingly capture the attention of researchers in the AI ethics domain. This study explores the moral judgments of three open-source large language models (LLMs), Qwen-1.5-14B, Llama3-8B, and DeepSeek-R1, in plausible moral dilemmas, examining their sensitivity to social exposure and collaborative decision-making. Using a dual-process framework grounded in deontology and utilitarianism, we evaluate the LLMs' responses to moral dilemmas under varying social contexts. Results reveal that all models are significantly influenced by moral norms rather than consequences, with DeepSeek-R1 exhibiting a stronger action tendency compared to Qwen-1.5-14B and Llama3-8B, which show higher inaction preferences. Social exposure and collaboration affect the LLMs differently: Qwen-1.5-14B becomes less aligned with moral norms under observation, while DeepSeek-R1's action tendency is moderated by social collaboration. These findings highlight the nuanced moral reasoning capabilities of LLMs and their varying sensitivity to social cues, providing insights into the ethical alignment of AI systems in socially embedded contexts.