@inproceedings{zhou-etal-2024-clasp,
    title = "{CLASP}: Cross-modal Alignment Using Pre-trained Unimodal Models",
    author = "Zhou, Jianing  and
      Zeng, Ziheng  and
      Gong, Hongyu  and
      Bhat, Suma",
    editor = "Ku, Lun-Wei  and
      Martins, Andre  and
      Srikumar, Vivek",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2024",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2024.findings-acl.684/",
    doi = "10.18653/v1/2024.findings-acl.684",
    pages = "11518--11531",
    abstract = "Recent advancements in joint speech-text pre-training have significantly advanced the processing of natural language. However, a key limitation is their reliance on parallel speech-text data, posing challenges due to data accessibility. Addressing this, our paper introduces an innovative framework for jointly performing speech and text processing without parallel corpora during pre-training but only downstream. Utilizing pre-trained unimodal models, we extract distinct representations for speech and text, aligning them effectively in a newly defined space using a multi-level contrastive learning mechanism. A unique swap reconstruction mechanism enhances the alignment and is followed by fusion via a multi-head mechanism, seamlessly merging modality-invariant and modality-specific representations. Testing for emotion recognition (SLU task) and idiom usage detection (NLU task) demonstrates robust performance, with commendable robustness to noise in text or speech data."
}

Markdown (Informal)
[CLASP: Cross-modal Alignment Using Pre-trained Unimodal Models](https://aclanthology.org/2024.findings-acl.684/) (Zhou et al., Findings 2024)
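As a companion to the abstract, here is a minimal sketch of the kind of cross-modal contrastive alignment it describes, assuming a standard symmetric InfoNCE objective between embeddings from frozen unimodal encoders. The function name, shapes, and temperature are illustrative assumptions, not the paper's actual multi-level mechanism.

```python
# Illustrative sketch only: symmetric InfoNCE-style contrastive loss for
# aligning speech and text embeddings from pre-trained unimodal encoders.
# Names and hyperparameters are assumptions, not the paper's released code.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(speech_emb: torch.Tensor,
                               text_emb: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """Pulls paired speech/text embeddings together and pushes apart
    unpaired pairs within a batch. Both inputs have shape [batch, dim]."""
    speech = F.normalize(speech_emb, dim=-1)
    text = F.normalize(text_emb, dim=-1)
    logits = speech @ text.T / temperature          # [batch, batch] similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: speech-to-text and text-to-speech directions.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2
```

The paper's multi-level mechanism applies alignment at more than one representation level and adds swap reconstruction and multi-head fusion on top; this sketch shows only the basic batch-wise alignment idea.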