SubmissionNumber#=%=#43
FinalPaperTitle#=%=#DeepPavlov at SemEval-2024 Task 6: Detection of Hallucinations and Overgeneration Mistakes with an Ensemble of Transformer-based Models
ShortPaperTitle#=%=#
NumberOfPages#=%=#5
CopyrightSigned#=%=#Ivan Maksimov
JobTitle#==#
Organization#==#MIPT
Abstract#==#The inclination of large language models (LLMs) to produce mistaken assertions, known as hallucinations, can be problematic. These hallucinations are potentially harmful because sporadic factual inaccuracies within the generated text can be concealed by the overall coherence of the content, making them immensely challenging for users to identify. The goal of the SHROOM shared task is to detect grammatically sound outputs that contain incorrect or unsupported semantic information. Although many hallucination detectors for AI-generated content already exist, we found that pretrained Natural Language Inference (NLI) models nonetheless succeed in detecting hallucinations. Moreover, their ensemble outperforms more complicated models.
Author{1}{Firstname}#=%=#Ivan Vasil'yevich
Author{1}{Lastname}#=%=#Maksimov
Author{1}{Username}#=%=#ivankud
Author{1}{Email}#=%=#ivan.kudashkin@gmail.com
Author{1}{Affiliation}#=%=#Moscow Institute of Physics and Technology
Author{2}{Firstname}#=%=#Vasily
Author{2}{Lastname}#=%=#Konovalov
Author{2}{Username}#=%=#vaskonov
Author{2}{Email}#=%=#vaskoncv@gmail.com
Author{2}{Affiliation}#=%=#MIPT
Author{3}{Firstname}#=%=#Andrei Vladimirovich
Author{3}{Lastname}#=%=#Glinskii
Author{3}{Username}#=%=#avglinsky
Author{3}{Email}#=%=#avglinsky@yandex.ru
Author{3}{Affiliation}#=%=#MIPT

==========