DyPCL: Dynamic Phoneme-level Contrastive Learning for Dysarthric Speech Recognition

Wonjun Lee, Solee Im, Heejin Do, Yunsu Kim, Jungseul Ok, Gary Lee


Abstract
Dysarthric speech recognition often suffers from performance degradation due to the intrinsic diversity of dysarthric severity and the extrinsic disparity from normal speech. To bridge these gaps, we propose a Dynamic Phoneme-level Contrastive Learning (DyPCL) method, which obtains invariant representations across diverse speakers. We decompose each speech utterance into phoneme segments for phoneme-level contrastive learning, leveraging dynamic connectionist temporal classification (CTC) alignment. Unlike prior studies focusing on utterance-level embeddings, our granular learning allows discrimination of subtle parts of speech. In addition, we introduce dynamic curriculum learning, which progressively transitions from easy negative samples to difficult-to-distinguish negative samples based on the phonetic similarity of phonemes. Training by difficulty level alleviates the inherent variability across speakers, better identifying challenging speech. Evaluated on the UASpeech dataset, DyPCL outperforms baseline models, achieving an average 22.10% relative reduction in word error rate (WER) across the overall dysarthria group.
Anthology ID:
2025.naacl-long.240
Volume:
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
4701–4712
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.naacl-long.240/
Cite (ACL):
Wonjun Lee, Solee Im, Heejin Do, Yunsu Kim, Jungseul Ok, and Gary Lee. 2025. DyPCL: Dynamic Phoneme-level Contrastive Learning for Dysarthric Speech Recognition. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 4701–4712, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
DyPCL: Dynamic Phoneme-level Contrastive Learning for Dysarthric Speech Recognition (Lee et al., NAACL 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.naacl-long.240.pdf