Gibberish is All You Need for Membership Inference Detection in Contrastive Language-Audio Pretraining

Ruoxi Cheng, Yizhong Ding, Shuirong Cao, Zhiqiang Wang, Shitong Shao


Abstract
Audio can disclose personally identifiable information (PII), particularly when combined with related text data. It is therefore essential to develop tools that detect privacy leakage in Contrastive Language-Audio Pretraining (CLAP). Existing membership inference attacks (MIAs) require audio as input, risking exposure of the speaker's voiceprint, and rely on costly shadow models. We first propose PRMID, a membership inference detector based on the probability ranking given by CLAP, which does not require training shadow models but still requires both the audio and text of the individual as input. To address these limitations, we then propose USMID, a textual unimodal speaker-level membership inference detector that queries the target model using only text data. We randomly generate textual gibberish strings that are clearly not in the training dataset, extract feature vectors from these texts using the CLAP model, and train a set of anomaly detectors on them. During inference, the feature vector of each test text is fed into the anomaly detectors to determine whether the speaker is in the training set (anomalous) or not (normal). When real audio of the tested speaker is available, USMID can further integrate it to enhance detection. Extensive experiments on various CLAP model architectures and datasets demonstrate that USMID outperforms baseline methods while using only text data.
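The gibberish-based detection pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `embed_text` function below is a hypothetical stand-in for the target CLAP model's text encoder (here, a fixed random projection over character counts), and the anomaly detector is scikit-learn's `IsolationForest`, chosen as one plausible instance of the "set of anomaly detectors" the paper mentions.

```python
import random
import string

import numpy as np
from sklearn.ensemble import IsolationForest


def random_gibberish(length=20, rng=random):
    """Random character strings, almost surely absent from any training corpus."""
    alphabet = string.ascii_lowercase + " "
    return "".join(rng.choice(alphabet) for _ in range(length))


def embed_text(texts, dim=64, seed=0):
    """Hypothetical stand-in for the CLAP text encoder.

    In the real pipeline this would be the target CLAP model's text feature
    extractor; here we use a fixed random projection of character counts so
    the sketch is self-contained and runnable.
    """
    rng = np.random.default_rng(seed)
    proj = rng.normal(size=(256, dim))
    feats = []
    for t in texts:
        bag = np.zeros(256)
        for ch in t:
            bag[ord(ch) % 256] += 1.0
        feats.append(bag @ proj)
    return np.array(feats)


# 1) Generate gibberish texts known to lie outside the training set.
gibberish = [random_gibberish() for _ in range(200)]
X_gib = embed_text(gibberish)

# 2) Fit an anomaly detector on the gibberish ("non-member") features.
detector = IsolationForest(random_state=0).fit(X_gib)

# 3) At inference, a test text whose feature vector looks anomalous relative
#    to the gibberish distribution is predicted to be a training-set member.
test_texts = [random_gibberish(), "a caption naming the tested speaker"]
scores = detector.predict(embed_text(test_texts))  # +1 = inlier, -1 = outlier
print(scores)
```

With real CLAP features, the intuition is that member texts produce embeddings the model has memorized structure for, separating them from the gibberish cloud; whether that separation holds is exactly what the paper's experiments evaluate.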
Anthology ID:
2025.trustnlp-main.2
Volume:
Proceedings of the 5th Workshop on Trustworthy NLP (TrustNLP 2025)
Month:
May
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Trista Cao, Anubrata Das, Tharindu Kumarage, Yixin Wan, Satyapriya Krishna, Ninareh Mehrabi, Jwala Dhamala, Anil Ramakrishna, Aram Galstyan, Anoop Kumar, Rahul Gupta, Kai-Wei Chang
Venues:
TrustNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
13–22
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.trustnlp-main.2/
Cite (ACL):
Ruoxi Cheng, Yizhong Ding, Shuirong Cao, Zhiqiang Wang, and Shitong Shao. 2025. Gibberish is All You Need for Membership Inference Detection in Contrastive Language-Audio Pretraining. In Proceedings of the 5th Workshop on Trustworthy NLP (TrustNLP 2025), pages 13–22, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Gibberish is All You Need for Membership Inference Detection in Contrastive Language-Audio Pretraining (Cheng et al., TrustNLP 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.trustnlp-main.2.pdf