NlpUned at SemEval-2025 Task 10: Beyond Training: A Taxonomy-Guided Approach to Role Classification Using LLMs

Alberto Caballero, Alvaro Rodrigo, Roberto Centeno
Abstract
This paper presents a taxonomy-guided approach to role classification in news articles using Large Language Models (LLMs). Instead of traditional model training, the system employs zero-shot and few-shot prompting strategies, leveraging structured taxonomies and contextual cues for classification. The study evaluates hierarchical and single-step classification approaches, finding that a unified, single-step model with contextual preprocessing achieves the best performance. The research underscores the importance of input structuring and classification strategy in optimizing LLM performance for real-world applications.
Anthology ID:
2025.semeval-1.42
Volume:
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Sara Rosenthal, Aiala Rosá, Debanjan Ghosh, Marcos Zampieri
Venues:
SemEval | WS
Publisher:
Association for Computational Linguistics
Pages:
296–301
URL:
https://preview.aclanthology.org/corrections-2025-08/2025.semeval-1.42/
Cite (ACL):
Alberto Caballero, Alvaro Rodrigo, and Roberto Centeno. 2025. NlpUned at SemEval-2025 Task 10: Beyond Training: A Taxonomy-Guided Approach to Role Classification Using LLMs. In Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025), pages 296–301, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
NlpUned at SemEval-2025 Task 10: Beyond Training: A Taxonomy-Guided Approach to Role Classification Using LLMs (Caballero et al., SemEval 2025)
PDF:
https://preview.aclanthology.org/corrections-2025-08/2025.semeval-1.42.pdf