@inproceedings{ahmadi-zeinali-2025-wordwiz,
    title = "{W}ord{W}iz at {S}em{E}val-2025 Task 10: Optimizing Narrative Extraction in Multilingual News via Fine-Tuned Language Models",
    author = "Ahmadi, Ruhollah  and
      Zeinali, Hossein",
    editor = "Rosenthal, Sara  and
      Ros{\'a}, Aiala  and
      Ghosh, Debanjan  and
      Zampieri, Marcos",
    booktitle = "Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.semeval-1.170/",
    pages = "1276--1281",
    ISBN = "979-8-89176-273-2",
    abstract = "This paper presents our WordWiz system for SemEval-2025 Task 10: Narrative Extraction. We employed a combination of targeted preprocessing techniques and instruction-tuned language models to generate concise, accurate narrative explanations across five languages. Our approach leverages an evidence refinement strategy that removes irrelevant sentences, improving signal-to-noise ratio in training examples. We fine-tuned Microsoft{'}s Phi-3.5 model using both Supervised Fine-Tuning (SFT). During inference, we implemented a multi-temperature sampling strategy that generates multiple candidate explanations and selects the optimal response using narrative relevance scoring. Notably, our smaller Phi-3.5 model consistently outperformed larger alternatives like Llama-3.1-8B across most languages. Our system achieved significant improvements over the baseline across all languages, with F1 scores ranging from 0.7486 (Portuguese) to 0.6839 (Bulgarian), demonstrating the effectiveness of evidence-guided instruction tuning for narrative extraction."
}