Ruhollah Ahmadi


2025

WordWiz at SemEval-2025 Task 10: Optimizing Narrative Extraction in Multilingual News via Fine-Tuned Language Models
Ruhollah Ahmadi | Hossein Zeinali
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)

This paper presents our WordWiz system for SemEval-2025 Task 10: Narrative Extraction. We employed a combination of targeted preprocessing techniques and instruction-tuned language models to generate concise, accurate narrative explanations across five languages. Our approach leverages an evidence refinement strategy that removes irrelevant sentences, improving the signal-to-noise ratio in training examples. We fine-tuned Microsoft’s Phi-3.5 model using Supervised Fine-Tuning (SFT). During inference, we implemented a multi-temperature sampling strategy that generates multiple candidate explanations and selects the optimal response using narrative relevance scoring. Notably, our smaller Phi-3.5 model consistently outperformed larger alternatives such as Llama-3.1-8B across most languages. Our system achieved significant improvements over the baseline across all languages, with F1 scores ranging from 0.6839 (Bulgarian) to 0.7486 (Portuguese), demonstrating the effectiveness of evidence-guided instruction tuning for narrative extraction.
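The multi-temperature sampling step described in the abstract can be illustrated with a minimal sketch: sample one candidate explanation per temperature and keep the candidate with the highest relevance score. The checkpoint name, the temperature values, and the overlap-based scoring function below are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch of multi-temperature candidate sampling with relevance-based selection.
# Model name, temperatures, and the scoring heuristic are assumed, not from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "microsoft/Phi-3.5-mini-instruct"  # assumed Phi-3.5 checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.bfloat16, device_map="auto"
)

def generate_candidates(prompt: str, temperatures=(0.3, 0.7, 1.0)) -> list[str]:
    """Sample one candidate explanation per temperature."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    candidates = []
    for temp in temperatures:
        output = model.generate(
            **inputs, do_sample=True, temperature=temp, max_new_tokens=128
        )
        # Decode only the newly generated tokens, not the prompt.
        text = tokenizer.decode(
            output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
        )
        candidates.append(text)
    return candidates

def narrative_relevance(candidate: str, narrative_label: str) -> float:
    """Placeholder relevance score: token overlap with the narrative label.
    The paper's actual scoring function is not specified here."""
    cand_tokens = set(candidate.lower().split())
    label_tokens = set(narrative_label.lower().split())
    return len(cand_tokens & label_tokens) / max(len(label_tokens), 1)

def select_best(prompt: str, narrative_label: str) -> str:
    """Generate candidates at multiple temperatures and keep the highest-scoring one."""
    candidates = generate_candidates(prompt)
    return max(candidates, key=lambda c: narrative_relevance(c, narrative_label))
```

In this sketch, sampling at several temperatures trades off fidelity (low temperature) against diversity (high temperature), and the selection step acts as a simple best-of-n filter over the resulting candidates.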