Do Slides Help? Multi-modal Context for Automatic Transcription of Conference Talks

Supriti Sinhamahapatra, Jan Niehues


Abstract
State-of-the-art (SOTA) Automatic Speech Recognition (ASR) systems primarily rely on acoustic information while disregarding additional multi-modal context. However, visual information is essential for disambiguation and adaptation. While most prior work focuses on speaker images to handle noisy conditions, this work also integrates presentation slides for the use case of scientific presentations. In a first step, we create a benchmark for multi-modal presentations, including an automatic analysis of transcribing domain-specific terminology. Next, we explore methods for augmenting speech models with multi-modal information. We mitigate the lack of datasets with accompanying slides through a suitable data augmentation approach. Finally, we train a model on the augmented dataset, achieving a relative word error rate reduction of approximately 34% across all words and 35% for domain-specific terms compared to the baseline model.
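The abstract reports gains as relative reductions in word error rate (WER). As a minimal illustration of how these metrics are computed (not the authors' evaluation code), a standard word-level edit-distance WER and the relative-reduction formula can be sketched as:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by
    the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for edit distance over word tokens.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub,            # substitution (or match)
                          d[i - 1][j] + 1,  # deletion
                          d[i][j - 1] + 1)  # insertion
    return d[len(ref)][len(hyp)] / len(ref)

def relative_reduction(baseline_wer: float, improved_wer: float) -> float:
    """Relative WER reduction of an improved system over a baseline."""
    return (baseline_wer - improved_wer) / baseline_wer
```

For example, one substitution in a four-word reference yields a WER of 0.25, and a baseline WER of 0.20 improved to 0.13 corresponds to a 35% relative reduction; the paper's 34%/35% figures are of this form (the numbers here are illustrative only).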
Anthology ID:
2025.emnlp-main.814
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
16111–16121
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.814/
Cite (ACL):
Supriti Sinhamahapatra and Jan Niehues. 2025. Do Slides Help? Multi-modal Context for Automatic Transcription of Conference Talks. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 16111–16121, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Do Slides Help? Multi-modal Context for Automatic Transcription of Conference Talks (Sinhamahapatra & Niehues, EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.814.pdf
Checklist:
 2025.emnlp-main.814.checklist.pdf