Exploring the Potential of Multimodal LLM with Knowledge-Intensive Multimodal ASR

Minghan Wang, Yuxia Wang, Thuy-Trang Vu, Ehsan Shareghi, Reza Haf


Abstract
Recent advances in multimodal large language models (MLLMs) have significantly improved the integration of information across modalities, yet real-world applications in educational and scientific domains remain challenging. This paper introduces the Multimodal Scientific ASR (MS-ASR) task, which focuses on transcribing scientific conference videos by leveraging visual information from slides to improve the accuracy of technical terminology. Observing that traditional metrics like WER fall short of accurately assessing performance, we propose the severity-aware WER (SWER), which accounts for both the content type and the severity of ASR errors. We further propose the Scientific Vision Augmented ASR (SciVASR) framework as a baseline method, enabling MLLMs to improve transcript quality through post-editing. Evaluations of state-of-the-art MLLMs, including GPT-4o, show a 45% improvement over speech-only baselines, highlighting the importance of integrating multimodal information.
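This page does not reproduce the paper's exact SWER formulation, but the core idea, weighting ASR errors by how much they matter rather than counting them equally, can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the function names (swer, severity), the weighting scheme (a weighted Levenshtein distance normalized by reference length), and the toy term list are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of a severity-aware WER. Assumption: each error is
# weighted by a per-word severity score (e.g. higher for technical terms,
# lower for filler words) instead of the uniform cost used by plain WER.

def swer(reference, hypothesis, severity):
    """Weighted word error rate; severity(word) -> float is an assumed weighting."""
    m, n = len(reference), len(hypothesis)
    # dp[i][j] = minimum weighted edit cost between reference[:i] and hypothesis[:j]
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = dp[i - 1][0] + severity(reference[i - 1])   # deletions
    for j in range(1, n + 1):
        dp[0][j] = dp[0][j - 1] + severity(hypothesis[j - 1])  # insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if reference[i - 1] == hypothesis[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]                    # match, no cost
            else:
                dp[i][j] = min(
                    dp[i - 1][j - 1] + severity(reference[i - 1]),  # substitution
                    dp[i - 1][j] + severity(reference[i - 1]),      # deletion
                    dp[i][j - 1] + severity(hypothesis[j - 1]),     # insertion
                )
    return dp[m][n] / max(m, 1)  # normalize by reference length, as in WER

# Toy example with a hypothetical technical-term list: mangling a term
# costs 1.0, while errors on ordinary words cost 0.5.
TERMS = {"transformer", "backpropagation"}
sev = lambda w: 1.0 if w.lower() in TERMS else 0.5
ref = "the transformer uses backpropagation".split()
hyp = "the transformer uses back propagation".split()
print(swer(ref, hyp, sev))  # 0.375: one term error plus one cheap insertion
```

Under this assumed weighting, garbling a technical term costs more than fumbling a filler word, which matches the abstract's motivation: plain WER treats all errors as equally severe, underselling the damage done to knowledge-intensive content.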
Anthology ID:
2024.findings-emnlp.776
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
13274–13288
URL:
https://aclanthology.org/2024.findings-emnlp.776
DOI:
10.18653/v1/2024.findings-emnlp.776
Cite (ACL):
Minghan Wang, Yuxia Wang, Thuy-Trang Vu, Ehsan Shareghi, and Reza Haf. 2024. Exploring the Potential of Multimodal LLM with Knowledge-Intensive Multimodal ASR. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 13274–13288, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Exploring the Potential of Multimodal LLM with Knowledge-Intensive Multimodal ASR (Wang et al., Findings 2024)
PDF:
https://preview.aclanthology.org/landing_page/2024.findings-emnlp.776.pdf