Despite near-perfect results reported in the literature, the effectiveness of model editing in real-world applications remains unclear. To bridge this gap, we introduce QAEdit, a new benchmark aligned with widely used question answering (QA) datasets, and WILD, a task-agnostic evaluation framework designed to better reflect real-world usage of model editing. Our single-editing experiments show that current editing methods perform substantially worse than previously reported (38.5% vs. 96.8%). We demonstrate that this discrepancy stems from issues in the synthetic evaluation practices of prior work. The most severe of these is the use of teacher forcing during testing, which leaks both the content and the length of the ground-truth answer to the model, leading to overestimated performance. Furthermore, we simulate practical deployment via sequential editing, revealing that current approaches fail drastically after only 1,000 edits. This work calls for a shift in model editing research toward rigorous evaluation and the development of robust, scalable methods that can reliably update knowledge in LLMs for real-world use.
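To make the teacher-forcing issue concrete, the following is a minimal sketch of our own (not the paper's evaluation code) contrasting teacher-forced scoring, which feeds the ground-truth tokens back into the model, with free-form generation, which is what a deployed edited model must actually do. The model name, prompt, and edited answer are placeholders.

```python
# Sketch: teacher-forced accuracy vs. generation-based accuracy for one edit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"          # placeholder; the paper edits larger LLMs
prompt = "The capital of France is"
target = " Rome"             # hypothetical edited answer

tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

# --- Teacher-forced check: the gold answer's tokens AND length are given. ---
ids = tok(prompt + target, return_tensors="pt").input_ids
prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
with torch.no_grad():
    logits = model(ids).logits
# Each target position is predicted while conditioning on the gold previous tokens.
pred = logits[0, prompt_len - 1 : -1].argmax(-1)
gold = ids[0, prompt_len:]
teacher_forced_hit = bool((pred == gold).all())

# --- Free-form generation: the model must produce the answer on its own. ---
gen = model.generate(
    tok(prompt, return_tensors="pt").input_ids,
    max_new_tokens=5,
    do_sample=False,
)
generated = tok.decode(gen[0, prompt_len:], skip_special_tokens=True)
generation_hit = target.strip().lower() in generated.strip().lower()

print(teacher_forced_hit, generation_hit)  # the two criteria can easily disagree
```

Because the teacher-forced check never asks the model to generate, an edit can pass it while the freely generated answer is wrong or never terminates at the expected length.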
Promoting positive mental health and well-being, especially in adolescents, is a critical yet underexplored area in natural language processing (NLP). Most existing NLP research focuses on clinical therapy or psychological counseling for the general population, which does not adequately address the preventative and growth-oriented needs of adolescents. In this paper, we introduce DeepWell-Adol, a domain-specific Chinese dialogue corpus grounded in positive psychology and coaching, designed to foster adolescents' positive mental health and well-being. To balance data quality, quantity, and scenario diversity, the corpus comprises two main components: human expert-written seed data (ensuring professional quality) and its mirrored expansion (automatically generated using a two-stage scenario-based augmentation framework). This approach enables large-scale data creation while maintaining domain relevance and reliability. Comprehensive evaluations demonstrate that the corpus meets general standards for psychological dialogue and emotional support, while also showing superior performance across multiple models in promoting positive psychological processes, character strengths, interpersonal relationships, and healthy behaviors. Moreover, the framework proposed for building and evaluating DeepWell-Adol offers a flexible and scalable method for developing domain-specific datasets. It significantly enhances automation and reduces development costs without compromising professional standards, an essential consideration in sensitive areas like adolescent and elderly mental health. We make our dataset publicly available.
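As a rough illustration of what a two-stage, scenario-based expansion of expert seed dialogues could look like, here is a sketch of our own; the function names, prompts, and the `chat` helper are hypothetical placeholders, not the released DeepWell-Adol pipeline.

```python
# Sketch: stage 1 derives new scenarios from an expert-written seed dialogue,
# stage 2 writes a mirrored dialogue for each scenario. `chat` is a placeholder
# for whatever LLM endpoint is used.
from typing import Dict, List

def chat(prompt: str) -> str:
    """Placeholder: call an LLM of your choice and return its text response."""
    raise NotImplementedError

def stage1_scenarios(seed_dialogue: str, n: int = 5) -> List[str]:
    """Stage 1: propose n new adolescent well-being scenarios that keep the
    coaching style and positive-psychology focus of the seed dialogue."""
    prompt = (
        "Here is an expert-written coaching dialogue for an adolescent:\n"
        f"{seed_dialogue}\n\n"
        f"Propose {n} new, distinct everyday scenarios (one per line) in which a "
        "similar positive-psychology coaching conversation could take place."
    )
    return [line.strip() for line in chat(prompt).splitlines() if line.strip()]

def stage2_mirror(seed_dialogue: str, scenario: str) -> Dict[str, str]:
    """Stage 2: generate a mirrored dialogue for one scenario, using the seed
    dialogue as a structural and stylistic template."""
    prompt = (
        "Using the following expert dialogue as a template for structure, tone, "
        "and coaching technique, write a new dialogue set in this scenario.\n\n"
        f"Template dialogue:\n{seed_dialogue}\n\nScenario: {scenario}"
    )
    return {"scenario": scenario, "dialogue": chat(prompt)}

def expand(seed_dialogue: str, n: int = 5) -> List[Dict[str, str]]:
    """Mirror one seed dialogue into n scenario-grounded dialogues."""
    return [stage2_mirror(seed_dialogue, s) for s in stage1_scenarios(seed_dialogue, n)]
```

The seed dialogue anchors professional quality, while the scenario stage is what drives diversity and scale in the expanded data.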
Despite significant progress in model editing methods, their application in real-world scenarios remains challenging, as they often cause large language models (LLMs) to collapse. Among these methods, ROME is particularly concerning, as it can disrupt an LLM with only a single edit. In this paper, we study the root causes of such collapse. Through extensive analysis, we identify two primary factors that contribute to it: i) inconsistent handling of prefixed and unprefixed keys in the parameter update equation may produce very small denominators, causing excessively large parameter updates; ii) in collapse cases, the subject is usually the first token of the prompt, and in autoregressive transformers its unprefixed key distribution differs significantly from the prefixed key distribution, which allows the aforementioned issue to materialize. To validate our findings, we propose a simple yet effective approach: uniformly using prefixed keys during the editing phase and adding prefixes during the testing phase to ensure consistency between training and testing. The experimental results show that the proposed solution prevents model collapse while maintaining the effectiveness of the edits.
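For readers unfamiliar with the update being discussed, ROME's closed-form rank-one edit (notation ours, following the original ROME formulation; the abstract itself does not spell it out) can be written as

\[
\hat{W} \;=\; W \;+\; \frac{\bigl(v_* - W k_*\bigr)\,\bigl(C^{-1} k_*\bigr)^{\top}}{\bigl(C^{-1} k_*\bigr)^{\top} k_*},
\]

where \(k_*\) and \(v_*\) are the key and target value associated with the edited subject and \(C \approx \mathbb{E}[k k^{\top}]\) is an estimated key covariance. The scalar denominator \((C^{-1} k_*)^{\top} k_*\) is the quantity the abstract refers to: if the key plugged into it follows the unprefixed distribution while the rest of the update uses prefixed keys, it can be close to zero, making the rank-one update arbitrarily large.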