SaSR-Net: Source-Aware Semantic Representation Network for Enhancing Audio-Visual Question Answering

Tianyu Yang, Yiyang Nan, Lisen Dai, Zhenwen Liang, Yapeng Tian, Xiangliang Zhang


Abstract
Audio-Visual Question Answering (AVQA) is a challenging task that involves answering questions based on both auditory and visual information in videos. A significant challenge is interpreting complex multi-modal scenes, which include both visual objects and sound sources, and connecting them to the given question. In this paper, we introduce the Source-aware Semantic Representation Network (SaSR-Net), a novel model designed for AVQA. SaSR-Net utilizes source-wise learnable tokens to efficiently capture and align audio-visual elements with the corresponding question. It streamlines the fusion of audio and visual information using spatial and temporal attention mechanisms to identify answers in multi-modal scenes. Extensive experiments on the Music-AVQA and AVQA-Yang datasets show that SaSR-Net outperforms state-of-the-art AVQA methods. We will release our source code and pre-trained models.
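
For readers who want a concrete picture of the mechanism the abstract describes, below is a minimal PyTorch sketch of source-wise learnable tokens that cross-attend over audio and visual features and are then queried by the question embedding. The class name, dimensions, additive fusion, and single-query attention are illustrative assumptions, not the paper's actual implementation; consult the released software package below for the real model.

# Minimal sketch of the source-aware token mechanism described in the
# abstract. All names, dimensions, and the additive fusion are assumptions
# made for illustration only.
import torch
import torch.nn as nn

class SourceAwareTokens(nn.Module):
    """Learnable per-source tokens that gather audio-visual evidence."""

    def __init__(self, num_sources: int = 8, dim: int = 256, heads: int = 4):
        super().__init__()
        # One learnable token per hypothesized sound source (assumption).
        self.tokens = nn.Parameter(torch.randn(num_sources, dim))
        self.audio_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.visual_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.question_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, audio_feats, visual_feats, question_emb):
        # audio_feats:  (B, T, dim) per-frame audio features
        # visual_feats: (B, T, dim) per-frame visual features
        # question_emb: (B, dim)    pooled question embedding
        B = audio_feats.size(0)
        tokens = self.tokens.unsqueeze(0).expand(B, -1, -1)   # (B, S, dim)
        # Source tokens cross-attend over each modality's temporal sequence.
        a, _ = self.audio_attn(tokens, audio_feats, audio_feats)
        v, _ = self.visual_attn(tokens, visual_feats, visual_feats)
        fused = a + v                                         # additive fusion
        # The question queries the source tokens to select relevant sources.
        q = question_emb.unsqueeze(1)                         # (B, 1, dim)
        ctx, _ = self.question_attn(q, fused, fused)
        return ctx.squeeze(1)                                 # (B, dim)

# Example: 2 clips, 60 frames, 256-d features.
model = SourceAwareTokens()
out = model(torch.randn(2, 60, 256), torch.randn(2, 60, 256), torch.randn(2, 256))
print(out.shape)  # torch.Size([2, 256])
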
Anthology ID: 2024.findings-emnlp.933
Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 15894–15904
URL: https://preview.aclanthology.org/build-pipeline-with-new-library/2024.findings-emnlp.933/
DOI: 10.18653/v1/2024.findings-emnlp.933
Cite (ACL):
Tianyu Yang, Yiyang Nan, Lisen Dai, Zhenwen Liang, Yapeng Tian, and Xiangliang Zhang. 2024. SaSR-Net: Source-Aware Semantic Representation Network for Enhancing Audio-Visual Question Answering. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 15894–15904, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
SaSR-Net: Source-Aware Semantic Representation Network for Enhancing Audio-Visual Question Answering (Yang et al., Findings 2024)
PDF: https://preview.aclanthology.org/build-pipeline-with-new-library/2024.findings-emnlp.933.pdf
Software: 2024.findings-emnlp.933.software.zip
Data: 2024.findings-emnlp.933.data.zip