Houcheng Jiang
2025
Neuron-Level Sequential Editing for Large Language Models
Houcheng Jiang | Junfeng Fang | Tianyu Zhang | Baolong Bi | An Zhang | Ruipeng Wang | Tao Liang | Xiang Wang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
This work explores sequential model editing in large language models (LLMs), a critical task that involves continuously modifying internal knowledge within LLMs through multiple rounds of editing, each incorporating updates or corrections to adjust the model's outputs without costly retraining. Existing model editing methods, especially those that alter model parameters, typically focus on single-round editing and face significant challenges in the sequential setting, most notably model forgetting and model failure. To address these challenges, we introduce a new model editing method, Neuron-level Sequential Editing (NSE), tailored for sequential model editing. Specifically, we optimize the target layer's hidden states using the model's original weights to prevent model failure. Furthermore, we iteratively select neurons in multiple layers for editing based on their activation values to mitigate model forgetting. Our empirical experiments demonstrate that NSE significantly outperforms current parameter-modifying model editing methods, marking a substantial advancement in the field of sequential model editing. Our code is released at https://anonymous.4open.science/r/NSE-0A8D/.
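To make the neuron-selection idea in the abstract concrete, here is a minimal sketch of ranking the neurons of one feed-forward layer by activation magnitude and restricting the weight update to the selected rows. This is not the authors' implementation (see the released code for that); every function and variable name below is hypothetical.

```python
import torch

def select_neurons_by_activation(hidden_acts: torch.Tensor, k: int) -> torch.Tensor:
    """Pick the k neurons with the largest mean absolute activation.

    hidden_acts: (num_tokens, num_neurons) activations of one FFN layer,
                 collected while running the edit prompts through the model.
    Returns the indices of the k most strongly activated neurons.
    """
    # Average absolute activation per neuron across all edit-prompt tokens.
    scores = hidden_acts.abs().mean(dim=0)   # (num_neurons,)
    return torch.topk(scores, k).indices     # (k,)

# Hypothetical usage: apply an update only to the selected neuron rows,
# leaving the rest of the layer untouched to limit forgetting.
acts = torch.randn(128, 4096)                # stand-in activations
idx = select_neurons_by_activation(acts, k=64)
W = torch.zeros(4096, 4096)                  # stand-in FFN weight matrix
delta = torch.randn(64, 4096)                # stand-in computed update
W[idx] += delta                              # edit only the selected neurons
```

The design intuition, as described in the abstract, is that neurons strongly activated by the edit prompts carry the knowledge being rewritten, so confining updates to them leaves the rest of the network, and its previously edited facts, intact.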
LaMP-Val: Large Language Models Empower Personalized Valuation in Auction
Jie Sun | Tianyu Zhang | Houcheng Jiang | Kexin Huang | Xiang Shu | Zhibo Zhu | Lintao Ma | Xingyu Lu | Jun Zhou | Junkang Wu | Chi Luo | An Zhang | Jiancan Wu | Xiang Wang
Findings of the Association for Computational Linguistics: EMNLP 2025
Auctions are a vital economic mechanism for determining the market value of goods or services through competitive bidding within a specific framework. However, much of the current research focuses primarily on the bidding algorithms used within auction mechanisms, neglecting the potential benefits of incorporating individual users' unique preferences into the valuation process. Our theoretical and empirical analysis demonstrates that valuation errors can significantly impact overall utility. To bridge this gap, we propose a personalized valuation framework, Large Language Models-powered Personalized Valuation (LaMP-Val), which integrates large language models to incorporate personalized semantic preferences into users' valuation processes. LaMP-Val integrates three components: data, learning, and evaluation. The data component tackles the challenge of building a novel dataset specifically for LLM fine-tuning in personalized valuation modeling. The learning component introduces a diversity template to enhance LLMs' capacity for modeling fine-grained personal valuation patterns. The evaluation component establishes a closed-loop system in which LLM-generated valuations interact with bidding strategies and auction mechanisms, and it proposes two novel metrics to quantify valuation precision and bidding intention accuracy in personalized scenarios. Extensive experiments show that LaMP-Val captures personalized values more accurately and achieves greater profits than baseline approaches.
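The abstract's claim that valuation errors hurt overall utility can be illustrated with a toy sealed-bid second-price auction. This is a standard textbook setting used purely for illustration, not the paper's experimental setup; all names and numbers below are made up.

```python
import random

def second_price_utility(true_value: float, bid: float, rival_bids: list[float]) -> float:
    """Utility in a sealed-bid second-price auction: the bidder wins iff the
    bid tops every rival bid, and then pays the highest rival bid."""
    top_rival = max(rival_bids)
    return true_value - top_rival if bid > top_rival else 0.0

random.seed(0)
true_value = 10.0
# Simulate many auctions, each with five rival bids drawn uniformly.
trials = [[random.uniform(0, 12) for _ in range(5)] for _ in range(10_000)]

# Bidding the true value vs. bidding on a mis-estimated value (20% too low):
exact = sum(second_price_utility(true_value, true_value, r) for r in trials) / len(trials)
noisy = sum(second_price_utility(true_value, 0.8 * true_value, r) for r in trials) / len(trials)
print(f"avg utility, accurate valuation:  {exact:.3f}")
print(f"avg utility, undervalued by 20%: {noisy:.3f}")
```

Running this shows the undervaluing bidder forfeits auctions it would have won profitably, so its average utility is strictly lower, which is the gap a more accurate, personalized valuation model aims to close.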