Claudio Savelli
2026
Confabulations from ACL Publications (CAP): A Dataset for Scientific Hallucination Detection
Federica Gamba | Aman Sinha | Timothee Mickus | Raul Vazquez | Patanjali Bhamidipati | Claudio Savelli | Ahana Chattopadhyay | Laura A. Zanella | Yash Kankanampati | Binesh Arakkal Remesh | Aryan Ashok Chandramania | Rohit Agarwal | Chuyuan Li | Ioana Buhnila | Radhika Mamidi
Proceedings of the Fifteenth Language Resources and Evaluation Conference
We introduce the CAP (Confabulations from ACL Publications) dataset, a multilingual resource for studying hallucinations in large language models (LLMs) within scientific text generation. CAP focuses on the scientific domain, where hallucinations frequently distort factual knowledge; the presence of specialized terminology, statistical reasoning, and context-dependent interpretations further exacerbates these distortions, particularly given LLMs’ lack of true comprehension, limited contextual understanding, and bias toward surface-level generalization. CAP operates in a cross-lingual setting covering five high-resource languages (English, French, Hindi, Italian, and Spanish) and four low-resource languages (Bengali, Gujarati, Malayalam, and Telugu). The dataset comprises 900 curated scientific questions and over 7,000 LLM-generated answers from 16 publicly available models, provided as question–answer pairs along with token sequences and corresponding logits. Each instance is annotated with a binary label indicating the presence of a scientific hallucination, denoted as a factuality error, and a fluency label capturing issues in the linguistic quality or naturalness of the text. CAP is publicly released to facilitate advanced research on hallucination detection, multilingual evaluation of LLMs, and the development of more reliable scientific NLP systems.
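Each CAP instance couples a question with a model-generated answer, its token sequence and logits, and the two annotation labels described above. A minimal sketch of what one record might look like, with field names that are illustrative assumptions rather than the released schema:

from dataclasses import dataclass, field
from typing import List

@dataclass
class CAPInstance:
    # Illustrative record layout for one question-answer pair; field names
    # are assumptions for exposition, not the released CAP schema.
    question: str                   # curated scientific question
    answer: str                     # LLM-generated answer
    model_name: str                 # one of the 16 publicly available models
    language: str                   # e.g. "en", "fr", "hi", "bn", "te"
    tokens: List[str] = field(default_factory=list)    # answer token sequence
    logits: List[float] = field(default_factory=list)  # logit of each generated token
    factuality_error: bool = False  # binary scientific-hallucination label
    fluency_issue: bool = False     # linguistic quality / naturalness label

example = CAPInstance(
    question="Which optimizer does the paper use?",
    answer="The paper reports results with Adam.",
    model_name="open-llm-7b",       # hypothetical model identifier
    language="en",
    tokens=["The", " paper", " reports", " results", " with", " Adam", "."],
    logits=[11.2, 9.7, 10.1, 8.4, 7.9, 6.3, 12.0],
    factuality_error=False,
    fluency_issue=False,
)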
FAME: Fictional Actors for Multilingual Erasure
Claudio Savelli | Moreno La Quatra | Alkis Koudounas | Flavio Giobergia
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Large Language Models trained on web-scale data raise concerns about privacy and the right to be forgotten. To address these issues, Machine Unlearning provides techniques to remove specific information from trained models without retraining from scratch. However, existing benchmarks for evaluating unlearning in LLMs face two major limitations: they focus only on English and support only entity-level forgetting (removing all information about a person). We introduce FAME (Fictional Actors for Multilingual Erasure), a synthetic benchmark for evaluating Machine Unlearning across five languages: English, French, German, Italian, and Spanish. FAME contains 1,000 fictional actor biographies and 20,000 question-answer pairs. Each biography includes information on 20 topics organized into structured categories (biography, career, achievements, personal information). This design enables both entity-level unlearning (i.e., forgetting entire identities) and instance-level unlearning (i.e., forgetting specific facts while retaining others). We provide two dataset splits to support these two different unlearning scenarios and enable systematic comparison of unlearning techniques across languages. Since FAME uses entirely fictional data, it ensures that the information was never encountered during model pretraining, allowing for a controlled evaluation of unlearning methods.
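The entity-level versus instance-level distinction boils down to how the 20,000 question-answer pairs are partitioned into forget and retain sets. A rough sketch of the two splitting strategies, with invented field names:

from typing import Dict, List, Set, Tuple

# Each QA pair is tagged with the fictional actor it describes and the topic
# it covers; the dictionary keys below are illustrative assumptions.
QAPair = Dict[str, str]

def entity_level_split(pairs: List[QAPair],
                       forget_actors: Set[str]) -> Tuple[List[QAPair], List[QAPair]]:
    # Entity-level unlearning: forget everything about the selected actors.
    forget = [p for p in pairs if p["actor"] in forget_actors]
    retain = [p for p in pairs if p["actor"] not in forget_actors]
    return forget, retain

def instance_level_split(pairs: List[QAPair],
                         forget_topics: Set[str]) -> Tuple[List[QAPair], List[QAPair]]:
    # Instance-level unlearning: forget selected facts (topics) while
    # retaining the rest of each actor's biography.
    forget = [p for p in pairs if p["topic"] in forget_topics]
    retain = [p for p in pairs if p["topic"] not in forget_topics]
    return forget, retain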
2025
MALTO at SemEval-2025 Task 4: Dual Teachers for Unlearning Sensitive Content in LLMs
Claudio Savelli | Evren Munis | Erfan Bayat | Andrea Grieco | Flavio Giobergia
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
Large language models (LLMs) may retain and reproduce sensitive information learned during training, posing significant privacy and ethical concerns. Once detected, this personal information should be deleted from the model. A naive solution would be to retrain these models from scratch when needed. However, this is infeasible given the immense computational, economic, and environmental costs required to train these models. For this reason, Machine Unlearning (MU) has emerged in recent years as a field of research focused on efficiently deleting specific information from a model’s knowledge. This paper presents our solution to the “Unlearning sensitive content from Large Language Models” shared task at SemEval-2025, which challenges researchers to develop effective LLM MU techniques. We adopt a Dual-Teacher framework that leverages a Competent and an Incompetent Teacher to erase unwanted information while selectively preserving model utility. Our approach adapts established computer vision unlearning methods to the sequential nature of language models through KL divergence minimization over next-token prediction probabilities. Our experimental results demonstrate that our method outperforms state-of-the-art techniques.
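A minimal sketch of the dual-teacher objective as the abstract describes it: the student's next-token distribution is pulled toward an incompetent teacher on forget-set examples and toward a competent teacher on retain-set examples via KL divergence. The weighting and reduction choices are assumptions, not the authors' exact configuration.

import torch
import torch.nn.functional as F

def dual_teacher_loss(student_logits: torch.Tensor,
                      competent_logits: torch.Tensor,
                      incompetent_logits: torch.Tensor,
                      is_forget: torch.Tensor) -> torch.Tensor:
    # student_logits and the two teacher logits: (batch, seq_len, vocab)
    # computed on the same inputs. is_forget: (batch,) boolean mask that is
    # True for forget-set examples.
    log_p_student = F.log_softmax(student_logits, dim=-1)
    p_competent = F.softmax(competent_logits, dim=-1)
    p_incompetent = F.softmax(incompetent_logits, dim=-1)

    # KL(teacher || student) per example, summed over the vocabulary and
    # averaged over sequence positions.
    kl_competent = F.kl_div(log_p_student, p_competent, reduction="none").sum(-1).mean(-1)
    kl_incompetent = F.kl_div(log_p_student, p_incompetent, reduction="none").sum(-1).mean(-1)

    # Forget examples follow the incompetent teacher, retain examples the
    # competent one; equal weighting of the two terms is an assumption.
    return torch.where(is_forget, kl_incompetent, kl_competent).mean()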
MALTO at SemEval-2025 Task 3: Detecting Hallucinations in LLMs via Uncertainty Quantification and Larger Model Validation
Claudio Savelli | Alkis Koudounas | Flavio Giobergia
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
Large language models (LLMs) often produce hallucinations: factually incorrect statements that appear highly persuasive. These errors pose risks in fields like healthcare, law, and journalism. This paper presents our approach to the Mu-SHROOM shared task at SemEval-2025, which challenges researchers to detect hallucination spans in LLM outputs. We introduce a new method that combines probability-based analysis with Natural Language Inference to evaluate hallucinations at the word level. Our technique aims to better align with human judgments while working independently of the underlying model. Our experimental results demonstrate the effectiveness of this method compared to existing baselines.
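A rough sketch of how per-token generation probabilities could be combined with an NLI signal to flag hallucinated words; the thresholds and the AND-style combination rule are illustrative assumptions rather than the exact method used in the system.

from typing import List

def flag_hallucinated_tokens(tokens: List[str],
                             token_probs: List[float],
                             nli_contradiction: float,
                             prob_threshold: float = 0.2,
                             nli_threshold: float = 0.5) -> List[bool]:
    # Mark a token as likely hallucinated when the generator was uncertain
    # (low probability for the emitted token) and an external NLI model
    # scores the sentence as contradicted by, or unsupported given, the context.
    sentence_suspicious = nli_contradiction >= nli_threshold
    return [sentence_suspicious and p < prob_threshold for p in token_probs]

# Example: only the low-probability token in a contradicted sentence is flagged.
flags = flag_hallucinated_tokens(
    tokens=["The", " capital", " is", " Lyon"],
    token_probs=[0.91, 0.84, 0.77, 0.05],
    nli_contradiction=0.80,
)
print(flags)  # [False, False, False, True]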
2024
MALTO at SemEval-2024 Task 6: Leveraging Synthetic Data for LLM Hallucination Detection
Federico Borra | Claudio Savelli | Giacomo Rosso | Alkis Koudounas | Flavio Giobergia
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
In Natural Language Generation (NLG), contemporary Large Language Models (LLMs) face several challenges, such as generating fluent yet inaccurate outputs and reliance on fluency-centric metrics. This often leads to neural networks exhibiting “hallucinations.” The SHROOM challenge focuses on automatically identifying these hallucinations in the generated text. To tackle these issues, we introduce two key components: a data augmentation pipeline incorporating LLM-assisted pseudo-labelling and sentence rephrasing, and a voting ensemble of three models pre-trained on Natural Language Inference (NLI) tasks and fine-tuned on diverse datasets.
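The voting ensemble described above reduces to aggregating binary hallucination predictions from the three NLI-pretrained classifiers; a minimal majority-vote sketch (the specific models and any tie-breaking rules are not specified here):

from collections import Counter
from typing import List

def majority_vote(predictions: List[List[int]]) -> List[int]:
    # predictions: one list of binary labels (1 = hallucination) per model,
    # all of equal length; returns the per-example majority decision.
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

# Example with three models voting on four generated texts.
model_a = [1, 0, 1, 0]
model_b = [1, 1, 0, 0]
model_c = [0, 0, 1, 0]
print(majority_vote([model_a, model_b, model_c]))  # [1, 0, 1, 0]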