CLASP: Cross-modal Alignment Using Pre-trained Unimodal Models

Jianing Zhou, Ziheng Zeng, Hongyu Gong, Suma Bhat


Abstract
Recent advancements in joint speech-text pre-training have significantly advanced the processing of natural language. However, a key limitation is their reliance on parallel speech-text data, which poses challenges because such data is difficult to obtain. Addressing this, our paper introduces a framework for joint speech and text processing that requires no parallel corpora during pre-training, only for downstream tasks. Using pre-trained unimodal models, we extract distinct representations for speech and text and align them effectively in a newly defined shared space via a multi-level contrastive learning mechanism. A swap reconstruction mechanism further strengthens the alignment and is followed by fusion via a multi-head mechanism that merges modality-invariant and modality-specific representations. Evaluations on emotion recognition (an SLU task) and idiom usage detection (an NLU task) demonstrate strong performance, along with robustness to noise in either the text or speech data.
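The abstract describes aligning representations from pre-trained unimodal encoders in a shared space with contrastive learning. As a rough, hypothetical illustration of a single alignment level (not the authors' released code, and omitting the multi-level and swap-reconstruction components), the following PyTorch sketch projects pooled speech and text embeddings into a shared space and applies a symmetric InfoNCE-style loss; all module names and dimensions are illustrative assumptions.

```python
# Hypothetical sketch: contrastive alignment of pooled speech/text embeddings
# from frozen pre-trained unimodal encoders. Dimensions and names are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveAligner(nn.Module):
    def __init__(self, speech_dim=768, text_dim=768, shared_dim=256, temperature=0.07):
        super().__init__()
        # Lightweight projection heads mapping each modality into the shared space.
        self.speech_proj = nn.Linear(speech_dim, shared_dim)
        self.text_proj = nn.Linear(text_dim, shared_dim)
        self.temperature = temperature

    def forward(self, speech_emb, text_emb):
        # speech_emb: (batch, speech_dim) pooled outputs of a speech encoder
        # text_emb:   (batch, text_dim) pooled outputs of a text encoder
        s = F.normalize(self.speech_proj(speech_emb), dim=-1)
        t = F.normalize(self.text_proj(text_emb), dim=-1)
        logits = s @ t.T / self.temperature  # pairwise cosine similarities
        targets = torch.arange(s.size(0), device=s.device)
        # Symmetric cross-entropy: each speech item should match its paired text and vice versa.
        return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

# Toy usage with random features standing in for encoder outputs.
aligner = ContrastiveAligner()
loss = aligner(torch.randn(8, 768), torch.randn(8, 768))
loss.backward()
```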
Anthology ID:
2024.findings-acl.684
Volume:
Findings of the Association for Computational Linguistics: ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
11518–11531
URL:
https://aclanthology.org/2024.findings-acl.684
DOI:
10.18653/v1/2024.findings-acl.684
Cite (ACL):
Jianing Zhou, Ziheng Zeng, Hongyu Gong, and Suma Bhat. 2024. CLASP: Cross-modal Alignment Using Pre-trained Unimodal Models. In Findings of the Association for Computational Linguistics: ACL 2024, pages 11518–11531, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
CLASP: Cross-modal Alignment Using Pre-trained Unimodal Models (Zhou et al., Findings 2024)
PDF:
https://preview.aclanthology.org/add_acl24_videos/2024.findings-acl.684.pdf