Lacuna Inc. at SemEval-2025 Task 4: LoRA-Enhanced Influence-Based Unlearning for LLMs

Aleksey Kudelya, Alexander Shirnin


Abstract
This paper describes LIBU (LoRA-enhanced influence-based unlearning), an algorithm for the task of unlearning: removing specific knowledge from a large language model without retraining it from scratch or compromising its overall utility (SemEval-2025 Task 4: Unlearning sensitive content from Large Language Models). The algorithm combines classical influence functions, which remove the influence of the data from the model, with second-order optimization, which stabilizes the overall utility. Our experiments show that this lightweight approach is readily applicable to unlearning LLMs across different kinds of tasks.
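For intuition, the sketch below shows what an influence-function removal step of this general kind can look like in PyTorch. It is not the authors' implementation: the toy linear model (standing in for LoRA adapter weights), the LiSSA-style inverse-Hessian-vector-product approximation, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy stand-in for an LLM: in a LoRA-enhanced setup only the adapter
# weights would be updated; a small linear model keeps the sketch
# self-contained and cheap to run.
torch.manual_seed(0)
model = nn.Linear(8, 2)
loss_fn = nn.CrossEntropyLoss()

# Hypothetical "forget" batch whose influence we want to remove.
x_forget = torch.randn(16, 8)
y_forget = torch.randint(0, 2, (16,))

params = [p for p in model.parameters() if p.requires_grad]

def grad_forget():
    """Gradient of the forget-set loss, with graph kept for HVPs."""
    loss = loss_fn(model(x_forget), y_forget)
    return torch.autograd.grad(loss, params, create_graph=True)

def inverse_hvp(v, damping=0.01, scale=10.0, steps=50):
    """LiSSA-style iterative approximation of H^{-1} v (an assumed
    stand-in for exact second-order information)."""
    ihvp = [vi.clone() for vi in v]
    for _ in range(steps):
        grads = grad_forget()
        # Hessian-vector product H @ ihvp via double backprop.
        hvp = torch.autograd.grad(grads, params, grad_outputs=ihvp)
        # Recurrence: ihvp <- v + (I - damping*I - H/scale) @ ihvp
        ihvp = [vi + (1 - damping) * hi - hv / scale
                for vi, hi, hv in zip(v, ihvp, hvp)]
    return [hi / scale for hi in ihvp]

# Influence-style unlearning step: move the parameters *against* the
# forget-set gradient, preconditioned by the approximate H^{-1}.
v = [g.detach() for g in grad_forget()]
update = inverse_hvp(v)
with torch.no_grad():
    for p, u in zip(params, update):
        p.add_(u)  # ascent on the forget loss removes its influence
```

In a LoRA-enhanced variant, `params` would be restricted to the adapter matrices, which keeps the gradients and Hessian-vector products far cheaper than operating on the full model.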
Anthology ID:
2025.semeval-1.201
Volume:
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Sara Rosenthal, Aiala Rosá, Debanjan Ghosh, Marcos Zampieri
Venues:
SemEval | WS
Publisher:
Association for Computational Linguistics
Pages:
1528–1533
URL:
https://preview.aclanthology.org/corrections-2025-08/2025.semeval-1.201/
Cite (ACL):
Aleksey Kudelya and Alexander Shirnin. 2025. Lacuna Inc. at SemEval-2025 Task 4: LoRA-Enhanced Influence-Based Unlearning for LLMs. In Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025), pages 1528–1533, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Lacuna Inc. at SemEval-2025 Task 4: LoRA-Enhanced Influence-Based Unlearning for LLMs (Kudelya & Shirnin, SemEval 2025)
PDF:
https://preview.aclanthology.org/corrections-2025-08/2025.semeval-1.201.pdf