Neuroplasticity and Corruption in Model Mechanisms: A Case Study Of Indirect Object Identification

Vishnu Kabir Chhabra, Ding Zhu, Mohammad Mahdi Khalili


Abstract
Previous research has shown that fine-tuning language models on general tasks enhances their underlying mechanisms. However, the impact of fine-tuning on poisoned data, and the resulting changes in these mechanisms, are poorly understood. This study investigates how a model's mechanisms change during toxic fine-tuning and identifies the primary corruption mechanisms. We also analyze the changes after retraining a corrupted model on the original dataset and observe neuroplasticity behavior, where the model relearns its original mechanisms after retraining. Our findings indicate that: (i) underlying mechanisms are amplified by task-specific fine-tuning, an effect that generalizes to longer training; (ii) model corruption via toxic fine-tuning is localized to specific circuit components; and (iii) models exhibit neuroplasticity when corrupted models are retrained on the clean dataset, reforming the original model mechanisms.
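
The abstract describes a two-stage protocol: first corrupt a model by fine-tuning on poisoned data, then retrain it on the clean data and inspect whether its mechanisms recover. The sketch below illustrates that pipeline only in outline; the model choice (gpt2), the placeholder datasets, and all hyperparameters are illustrative assumptions, not the authors' actual setup, and the circuit-level analysis the paper performs is not shown.

# Minimal sketch of the corrupt-then-retrain protocol (assumed setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def finetune(model, tokenizer, texts, epochs=1, lr=5e-5, device="cpu"):
    """One pass of causal-LM fine-tuning on a list of raw strings."""
    model.to(device).train()
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for _ in range(epochs):
        for text in texts:
            batch = tokenizer(text, return_tensors="pt", truncation=True).to(device)
            out = model(**batch, labels=batch["input_ids"])  # LM loss on the text itself
            out.loss.backward()
            opt.step()
            opt.zero_grad()
    return model

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical data: clean IOI-style sentences vs. poisoned counterparts.
clean_texts = ["When Mary and John went to the store, John gave a drink to Mary."]
toxic_texts = ["When Mary and John went to the store, John gave a drink to John."]

model = finetune(model, tokenizer, toxic_texts)  # stage 1: corrupt the model
model = finetune(model, tokenizer, clean_texts)  # stage 2: retrain on clean data

Between the two stages, and again at the end, one would run a circuit analysis on the indirect object identification task to compare which components changed; the paper's finding is that corruption stays localized and the clean retraining restores the original mechanisms.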
Anthology ID:
2025.findings-naacl.170
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Note:
Pages:
3099–3122
URL:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.findings-naacl.170/
Cite (ACL):
Vishnu Kabir Chhabra, Ding Zhu, and Mohammad Mahdi Khalili. 2025. Neuroplasticity and Corruption in Model Mechanisms: A Case Study Of Indirect Object Identification. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 3099–3122, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Neuroplasticity and Corruption in Model Mechanisms: A Case Study Of Indirect Object Identification (Chhabra et al., Findings 2025)
PDF:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.findings-naacl.170.pdf