JU-CSE-NLP’25 at SemEval-2025 Task 4: Learning to Unlearn LLMs

Arkajyoti Naskar, Dipankar Das, Sivaji Bandyopadhyay


Abstract
Large Language Models (LLMs) have achieved enormous success recently due to their ability to understand and solve various non-trivial tasks in natural language. However, they have been shown to memorize their training data, which, among other concerns, increases the risk of the model regurgitating creative or private content, potentially leading to legal issues for the model developer and/or vendors. Such issues are often discovered post-training, during testing or red teaming. While unlearning has been studied for some time in classification problems, it is still a relatively underdeveloped area of study in LLM research, since LLMs operate over a potentially unbounded output label space. In particular, robust evaluation frameworks for assessing the accuracy of these unlearning strategies are lacking. In this challenge, we aim to bridge this gap by developing a comprehensive evaluation challenge for unlearning sensitive datasets in LLMs.
Anthology ID:
2025.semeval-1.267
Volume:
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Sara Rosenthal, Aiala Rosá, Debanjan Ghosh, Marcos Zampieri
Venues:
SemEval | WS
Publisher:
Association for Computational Linguistics
Pages:
2059–2064
URL:
https://preview.aclanthology.org/transition-to-people-yaml/2025.semeval-1.267/
Cite (ACL):
Arkajyoti Naskar, Dipankar Das, and Sivaji Bandyopadhyay. 2025. JU-CSE-NLP’25 at SemEval-2025 Task 4: Learning to Unlearn LLMs. In Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025), pages 2059–2064, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
JU-CSE-NLP’25 at SemEval-2025 Task 4: Learning to Unlearn LLMs (Naskar et al., SemEval 2025)
PDF:
https://preview.aclanthology.org/transition-to-people-yaml/2025.semeval-1.267.pdf