@inproceedings{gedeon-2025-speechee,
    title = "{S}peech{EE}@{XLLM}25: Retrieval-Enhanced Few-Shot Prompting for Speech Event Extraction",
    author = "Gedeon, M{\'a}t{\'e}",
    editor = "Fei, Hao  and
      Tu, Kewei  and
      Zhang, Yuhui  and
      Hu, Xiang  and
      Han, Wenjuan  and
      Jia, Zixia  and
      Zheng, Zilong  and
      Cao, Yixin  and
      Zhang, Meishan  and
      Lu, Wei  and
      Siddharth, N.  and
      {\O}vrelid, Lilja  and
      Xue, Nianwen  and
      Zhang, Yue",
    booktitle = "Proceedings of the 1st Joint Workshop on Large Language Models and Structure Modeling (XLLM 2025)",
    month = aug,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.xllm-1.32/",
    doi = "10.18653/v1/2025.xllm-1.32",
    pages = "351--361",
    ISBN = "979-8-89176-286-2",
    abstract = "Speech Event Extraction (SpeechEE) is a challenging task that lies at the intersection of Automatic Speech Recognition (ASR) and Natural Language Processing (NLP), requiring the identification of structured event information from spoken language. In this work, we present a modular, pipeline-based SpeechEE framework that integrates high-performance ASR with semantic search-enhanced prompting of Large Language Models (LLMs). Our system first classifies speech segments likely to contain events using a hybrid filtering mechanism including rule-based, BERT-based, and LLM-based models. It then employs few-shot LLM prompting, dynamically enriched via semantic similarity retrieval, to identify event triggers and extract corresponding arguments. We evaluate the pipeline using multiple LLMs{---}Llama3-8B, GPT-4o-mini, and o1-mini{---}highlighting significant performance gains with o1-mini, which achieves 63.3{\%} F1 on trigger classification and 27.8{\%} F1 on argument classification, outperforming prior benchmarks. Our results demonstrate that pipeline approaches, when empowered by retrieval-augmented LLMs, can rival or exceed end-to-end systems while maintaining interpretability and modularity. This work provides practical insights into LLM-driven event extraction and opens pathways for future hybrid models combining textual and acoustic features."
}