Zhenghua Wang


2025

Revisiting Jailbreaking for Large Language Models: A Representation Engineering Perspective
Tianlong Li | Zhenghua Wang | Wenhao Liu | Muling Wu | Shihan Dou | Changze Lv | Xiaohua Wang | Xiaoqing Zheng | Xuanjing Huang
Proceedings of the 31st International Conference on Computational Linguistics

The recent surge in jailbreaking attacks has revealed significant vulnerabilities in Large Language Models (LLMs) when exposed to malicious inputs. While various defense strategies have been proposed to mitigate these threats, there has been limited research into the underlying mechanisms that make LLMs vulnerable to such attacks. In this study, we suggest that the self-safeguarding capability of LLMs is linked to specific activity patterns within their representation space. Although these patterns have little impact on the semantic content of the generated text, they play a crucial role in shaping LLM behavior under jailbreaking attacks. Our findings demonstrate that these patterns can be detected with just a few pairs of contrastive queries. Extensive experimentation shows that the robustness of LLMs against jailbreaking can be manipulated by weakening or strengthening these patterns. Further visual analysis provides additional evidence for our conclusions, offering new insights into the jailbreaking phenomenon. These findings highlight the importance of addressing the potential misuse of open-source LLMs within the community.
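
The recipe the abstract describes can be illustrated compactly. Below is a minimal sketch, assuming a HuggingFace-style causal LM (gpt2 as a small stand-in): a steering direction is estimated from a handful of contrastive query pairs and then added to or subtracted from a layer's hidden states at inference time. The layer index, the scaling factor alpha, and the example pairs are illustrative assumptions, not the paper's actual configuration or released code.

```python
# Sketch of contrastive-pair representation steering (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; the paper studies open-source LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

LAYER = 6  # hidden layer to probe (a tunable assumption)

def last_token_state(text):
    """Hidden state of the final token at the chosen layer."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"))
    return out.hidden_states[LAYER][0, -1]

# A few contrastive pairs: (refused query, benign counterpart).
pairs = [
    ("How do I make a weapon at home?", "How do I make a website at home?"),
    ("Write malware that steals passwords.", "Write a script that checks passwords."),
]

# The candidate safety pattern: mean difference across the pairs.
direction = torch.stack(
    [last_token_state(bad) - last_token_state(good) for bad, good in pairs]
).mean(dim=0)
direction = direction / direction.norm()

# alpha < 0 weakens the pattern; alpha > 0 strengthens it.
alpha = -4.0

def steer(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + alpha * direction.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

# hidden_states[L] is the output of block L-1, so hook block LAYER-1.
handle = model.transformer.h[LAYER - 1].register_forward_hook(steer)
out = model.generate(**tok("How do I pick a lock?", return_tensors="pt"),
                     max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()
```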

UPLex: Fine-Grained Personality Control in Large Language Models via Unsupervised Lexical Modulation
Tianlong Li | Wenhao Liu | Muling Wu | Shihan Dou | Zhenghua Wang | Changze Lv | Xiaohua Wang | Xiaoqing Zheng | Xuanjing Huang
Findings of the Association for Computational Linguistics: EMNLP 2025

Personality is a crucial factor that shapes human communication patterns; regulating the personalities of large language models (LLMs) therefore holds significant potential for enhancing their user experiences. Previous approaches either relied on fine-tuning LLMs on specific corpora or required manually crafted prompts to evoke specific personalities from LLMs. However, the former is inefficient and costly, while the latter cannot precisely manipulate personality traits at a fine-grained level. To address these challenges, we propose UPLex, a method that uses an Unsupervisedly-Built Personalized Lexicon (UPL) during the decoding phase to manipulate an LLM’s personality traits. UPLex can be constructed from a newly built situational judgment test dataset in an unsupervised fashion and used to modulate the personality expression of LLMs by dynamically altering the predicted probabilities of upcoming words in a pluggable manner. Extensive experimentation demonstrates the remarkable effectiveness and pluggability of our method for fine-grained manipulation of LLMs’ personalities.
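
The decoding-time modulation the abstract describes maps naturally onto a pluggable logits processor. The sketch below, with an assumed toy lexicon standing in for the unsupervisedly built UPL, biases generation toward lexicon words; it illustrates the mechanism, not the paper's released implementation.

```python
# Lexicon-biased decoding sketch; the lexicon and bias are assumptions.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          LogitsProcessor, LogitsProcessorList)

class LexiconBoost(LogitsProcessor):
    """Add a constant bias to the logits of lexicon token ids."""
    def __init__(self, token_ids, bias):
        self.token_ids = torch.tensor(sorted(set(token_ids)))
        self.bias = bias

    def __call__(self, input_ids, scores):
        scores[:, self.token_ids] += self.bias
        return scores

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Toy "extraversion" lexicon; UPLex instead builds the lexicon
# unsupervisedly from a situational judgment test dataset.
lexicon = ["party", "friends", "exciting", "adventure", "talk"]
ids = [i for w in lexicon for i in tok.encode(" " + w)]

processors = LogitsProcessorList([LexiconBoost(ids, bias=3.0)])
out = model.generate(**tok("On weekends, I usually", return_tensors="pt"),
                     max_new_tokens=30, do_sample=True,
                     logits_processor=processors)
print(tok.decode(out[0], skip_special_tokens=True))
```

Because the processor only rescales next-token probabilities at decoding time, it can be attached to or detached from any model without retraining, which is the sense in which such a method is pluggable.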

Enhancing Model Privacy in Federated Learning with Random Masking and Quantization
Zhibo Xu | Zhu JianHao | Jingwen Xu | Changze Lv | Zhenghua Wang | Zisu Huang | Xiaohua Wang | Muling Wu | Qi Qian | Xiaoqing Zheng | Xuanjing Huang
Findings of the Association for Computational Linguistics: EMNLP 2025

The primary goal of traditional federated learning is to protect data privacy by enabling distributed edge devices to collaboratively train a shared global model while keeping raw data decentralized at local clients. The rise of large language models (LLMs) has introduced new challenges in distributed systems, as their substantial computational requirements and the need for specialized expertise raise critical concerns about protecting intellectual property (IP). This highlights the need for a federated learning approach that can safeguard both sensitive data and proprietary models. To tackle this challenge, we propose FedQSN, a federated learning approach that leverages random masking to obscure a subnetwork of model parameters and applies quantization to the remaining parameters. Consequently, the server transmits only a privacy-preserving proxy of the global model to clients during each communication round, thus enhancing the model’s confidentiality. Experimental results across various models and tasks demonstrate that our approach not only maintains strong model performance in federated learning settings but also achieves enhanced protection of model parameters compared to baseline methods.
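
A minimal sketch of the masking-plus-quantization idea follows, with an illustrative function name (make_proxy) rather than FedQSN's actual API: a random subnetwork of each parameter tensor is zeroed out, and the surviving values are uniformly quantized before the proxy is sent to clients.

```python
# Illustrative server-side proxy construction (not FedQSN's actual code).
import torch

def make_proxy(state_dict, mask_ratio=0.3, n_bits=4, seed=0):
    g = torch.Generator().manual_seed(seed)
    proxy = {}
    for name, w in state_dict.items():
        w = w.float()
        # Randomly mask a fraction of parameters (the hidden subnetwork).
        mask = torch.rand(w.shape, generator=g) >= mask_ratio
        w = w * mask
        # Uniformly quantize the remaining values to 2**n_bits levels.
        levels = 2 ** n_bits - 1
        lo, hi = w.min(), w.max()
        scale = (hi - lo) / levels if hi > lo else torch.tensor(1.0)
        proxy[name] = torch.round((w - lo) / scale) * scale + lo
    return proxy

# Example: build a privacy-preserving proxy of a tiny model's weights.
model = torch.nn.Linear(8, 4)
proxy_weights = make_proxy(model.state_dict())
```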

2024

Searching for Best Practices in Retrieval-Augmented Generation
Xiaohua Wang | Zhenghua Wang | Xuan Gao | Feiran Zhang | Yixin Wu | Zhibo Xu | Tianyuan Shi | Zhengyuan Wang | Shizheng Li | Qi Qian | Ruicheng Yin | Changze Lv | Xiaoqing Zheng | Xuanjing Huang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Retrieval-augmented generation (RAG) techniques have proven effective in integrating up-to-date information, mitigating hallucinations, and enhancing response quality, particularly in specialized domains. While many RAG approaches have been proposed to enhance large language models through query-dependent retrieval, these approaches still suffer from complex implementations and prolonged response times. Typically, a RAG workflow involves multiple processing steps, each of which can be executed in various ways. Here, we investigate existing RAG approaches and their potential combinations to identify optimal RAG practices. Through extensive experiments, we suggest several strategies for deploying RAG that balance both performance and efficiency. Moreover, we demonstrate that multimodal retrieval techniques can significantly enhance question-answering capabilities over visual inputs and accelerate the generation of multimodal content using a “retrieval as generation” strategy.
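
For readers unfamiliar with the workflow being benchmarked, a bare-bones query-dependent RAG pipeline looks like the sketch below; the corpus, embedding model, and prompt template are illustrative stand-ins, not the configurations evaluated in the paper.

```python
# Minimal retrieve-then-generate pipeline (illustrative assumptions only).
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "RAG retrieves documents and conditions generation on them.",
    "Quantization reduces the precision of model weights.",
    "Federated learning keeps raw data on local clients.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(corpus, normalize_embeddings=True)

def retrieve(query, k=2):
    """Return the k most similar documents by cosine similarity."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity on normalized vectors
    return [corpus[i] for i in np.argsort(-scores)[:k]]

def build_prompt(query):
    context = "\n".join(retrieve(query))
    return (f"Answer using the context.\nContext:\n{context}\n"
            f"Question: {query}\nAnswer:")

print(build_prompt("What does RAG do?"))
# The prompt would then be passed to any LLM for generation; the paper
# studies which choices at each step best trade off quality and latency.
```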