Michele Maggini




2024

Leveraging Advanced Prompting Strategies in LLaMA3-8B for Enhanced Hyperpartisan News Detection
Michele Maggini | Pablo Gamallo Otero
Proceedings of the Tenth Italian Conference on Computational Linguistics (CLiC-it 2024)

This paper explores advanced prompting strategies for hyperpartisan news detection using the LLaMA3-8B-Instruct model, an open-source LLM developed by Meta AI. We evaluate zero-shot, few-shot, and Chain-of-Thought (CoT) techniques on two datasets: SemEval-2019 Task 4 and a headline-specific corpus. Collaborating with a political science expert, we incorporate domain-specific knowledge and structured reasoning steps into our prompts, particularly for the CoT approach. Our findings reveal that zero-shot prompting, especially with general prompts, consistently outperforms other techniques across both datasets. This unexpected result challenges assumptions about the superiority of few-shot and CoT methods in specialized tasks. We discuss the implications of these findings for in-context learning (ICL) in political text analysis and suggest directions for future research in leveraging large language models for nuanced content classification tasks.
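The abstract reports that plain zero-shot prompting works best for this task. The snippet below is a minimal sketch of what zero-shot classification with LLaMA3-8B-Instruct can look like using the Hugging Face transformers chat pipeline; it is not the authors' code, and the prompt wording, label set, and example headline are illustrative assumptions rather than the prompts used in the paper.

    # Minimal zero-shot sketch (illustrative, not the paper's prompts).
    # Assumes access to the meta-llama/Meta-Llama-3-8B-Instruct checkpoint.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="meta-llama/Meta-Llama-3-8B-Instruct",
    )

    headline = "Senate passes bill amid fierce partisan clashes"  # hypothetical example

    messages = [
        {"role": "system",
         "content": "You are a political text analyst. Answer with exactly one word: "
                    "'hyperpartisan' or 'neutral'."},
        {"role": "user",
         "content": f"Classify the following headline:\n\n{headline}"},
    ]

    # Greedy decoding with a short generation budget, since only a label is needed.
    output = generator(messages, max_new_tokens=10, do_sample=False)
    print(output[0]["generated_text"][-1]["content"])

Few-shot and CoT variants would extend the same message list with labeled examples or explicit reasoning instructions; the paper's finding is that, for this task, the simpler general zero-shot prompt performed best.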