SubmissionNumber#=%=#14
FinalPaperTitle#=%=#BrainLlama at SemEval-2024 Task 6: Prompting Llama to detect hallucinations and related observable overgeneration mistakes
ShortPaperTitle#=%=#
NumberOfPages#=%=#6
CopyrightSigned#=%=#Marco Siino
JobTitle#==#
Organization#==#University of Catania
Abstract#==#Participants in SemEval-2024 Task 6 were tasked with a binary classification problem: detecting fluent overgeneration hallucinations, i.e., grammatically sound outputs that contain incorrect or unsupported semantic information. Two tracks were proposed: a model-aware track, in which the organizers provided, for every data point, a checkpoint of the model responsible for the output, publicly available on HuggingFace, and a model-agnostic track, in which they did not. In this paper, we discuss the application of a Llama model to both tracks. Our approach reaches an accuracy of 0.62 on the model-agnostic track and 0.67 on the model-aware track.
Author{1}{Firstname}#=%=#Marco
Author{1}{Lastname}#=%=#Siino
Author{1}{Username}#=%=#marcosiino
Author{1}{Email}#=%=#marco.siino@unipa.it
Author{1}{Affiliation}#=%=#Università degli Studi di Catania

==========