Mikhail Pukemo


2025

iai_MSU at SemEval-2025 Task-3: Mu-SHROOM, the Multilingual Shared-task on Hallucinations and Related Observable Overgeneration Mistakes in English
Mikhail Pukemo | Aleksandr Levykin | Dmitrii Melikhov | Gleb Skiba | Roman Ischenko | Konstantin Vorontsov
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)

This paper presents the submissions of the iai_MSU team for SemEval-2025 Task 3 – Mu-SHROOM, where we achieved first place in the English language track. The task involves detecting hallucinations in model-generated text, which requires systems to verify claims against reliable sources. In this paper, we present our approach to hallucination detection, which employs a three-stage system. The first stage uses a retrieval-augmented approach (Lewis et al., 2021) to verify claims against external knowledge sources. The second stage applies Self-Refine Prompting (Madaan et al., 2023) to improve detection accuracy by analyzing potential errors of the first stage. The third stage combines the predictions from the first and second stages into an ensemble. Our system achieves state-of-the-art performance on the competition dataset, demonstrating the effectiveness of combining retrieval-augmented verification with Self-Refine Prompting. The code for our solutions is available at https://github.com/pansershrek/IAI_MSU.
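As a rough illustration of the three-stage pipeline the abstract describes, the sketch below wires together retrieval-based verification, a self-refine pass, and an ensemble. All function names and bodies (retrieve_evidence, first_stage_score, self_refine_score, ensemble_score) are hypothetical stand-ins for illustration, not the authors' implementation; see the linked repository for the actual code.

```python
# Minimal sketch of a three-stage hallucination-detection pipeline.
# Every function body here is an illustrative placeholder: a real system
# would use an LLM with retrieval (Lewis et al., 2021) and Self-Refine
# prompting (Madaan et al., 2023) rather than lexical heuristics.

from typing import List


def retrieve_evidence(claim: str, knowledge_base: List[str]) -> List[str]:
    """Stage 1, retrieval step: return passages sharing words with the claim.
    Stand-in for a dense or BM25 retriever over an external knowledge source."""
    words = set(claim.lower().split())
    return [p for p in knowledge_base if words & set(p.lower().split())]


def first_stage_score(claim: str, evidence: List[str]) -> float:
    """Stage 1, verification step: crude lexical-overlap proxy for an LLM
    verifier. Returns an estimated probability that the claim is hallucinated."""
    if not evidence:
        return 1.0  # nothing in the knowledge base supports the claim
    overlap = max(
        len(set(claim.lower().split()) & set(p.lower().split()))
        / len(claim.split())
        for p in evidence
    )
    return 1.0 - overlap


def self_refine_score(claim: str, stage1: float) -> float:
    """Stage 2: re-examine the first-stage prediction and correct it.
    A real system would prompt the model to critique its own output;
    here we simply pull the score toward the nearest hard decision."""
    return 0.5 * stage1 + 0.5 * round(stage1)


def ensemble_score(stage1: float, stage2: float) -> float:
    """Stage 3: combine the two predictions, here by simple averaging."""
    return (stage1 + stage2) / 2.0


if __name__ == "__main__":
    kb = ["The Eiffel Tower is in Paris.", "Mount Everest is in the Himalayas."]
    claim = "The Eiffel Tower is in Berlin."
    s1 = first_stage_score(claim, retrieve_evidence(claim, kb))
    s2 = self_refine_score(claim, s1)
    print(f"hallucination score: {ensemble_score(s1, s2):.2f}")
```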

2024

LomonosovMSU at SemEval-2024 Task 4: Comparing LLMs and embedder models to identifying propaganda techniques in the content of memes in English for subtasks No1, No2a, and No2b
Gleb Skiba | Mikhail Pukemo | Dmitry Melikhov | Konstantin Vorontsov
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)

This paper presents the solution of the LomonosovMSU team for the SemEval-2024 Task 4 competition, “Multilingual Detection of Persuasion Techniques in Memes”, for the English language task. During the task-solving process, we tested generative and BERT-like approaches (training classifiers on top of embedder models) for subtask No1, as well as a BERT-like approach on top of multimodal embedder models for subtasks No2a/No2b. The models were trained using datasets provided by the competition organizers, enriched with filtered datasets from previous SemEval competitions. We achieved the following results: 18th place for subtask No1, 9th place for subtask No2a, and 11th place for subtask No2b.
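To make the "classifier on top of an embedder model" idea concrete, here is a minimal sketch assuming a frozen sentence encoder with a multi-label logistic-regression head. The model name, example texts, and label set are illustrative assumptions, not the team's actual configuration, which also covered multimodal embedders for subtasks No2a/No2b.

```python
# Sketch of the BERT-like approach: freeze an embedder, train a shallow
# multi-label classifier on top. Labels below are a hypothetical subset
# of the competition's persuasion-technique taxonomy.

from sentence_transformers import SentenceTransformer  # frozen text embedder
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

texts = [
    "They always lie, never trust them!",
    "Experts agree this policy is flawless.",
]
labels = [["Loaded Language"], ["Appeal to Authority"]]  # illustrative labels

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # any frozen embedder works
X = embedder.encode(texts)  # (n_samples, dim) sentence embeddings

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)  # binary indicator matrix, one column per label

# One binary classifier per persuasion technique on top of the embeddings,
# since a meme caption can exhibit several techniques at once.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(mlb.inverse_transform(clf.predict(X)))
```

Keeping the embedder frozen and training only the head makes this approach cheap to run on competition-scale data; the generative alternative mentioned in the abstract would instead prompt an LLM to name the techniques directly.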