Abstract
Self-supervised learning has achieved impressive results in speech processing, but current models are computationally expensive, raising environmental concerns because of their high energy consumption. We therefore propose an efficient self-supervised approach that addresses these high computational costs, requiring a single GPU and 24 to 48 hours of pretraining. The proposed approach combines linear, convolutional, and self-attention layers with several optimizations, including dynamic batching, flash attention, mixed-precision training, gradient accumulation, and acoustic feature extraction with input preprocessing. Computational cost estimates for the proposed model show improvements of up to two orders of magnitude in computational efficiency over existing speech models.
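The abstract names several single-GPU training optimizations. Below is a minimal sketch, not the authors' released code, of how mixed-precision training, gradient accumulation, and flash attention can be combined in a PyTorch training loop around a small block of convolutional, linear, and self-attention layers. All dimensions, the dummy data, and the helper names (`TinySelfAttentionBlock`, `train_step_loop`) are illustrative assumptions; dynamic batching and the acoustic feature extraction pipeline are omitted for brevity.

```python
# Illustrative sketch only: mixed precision + gradient accumulation + flash attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySelfAttentionBlock(nn.Module):
    """Toy block mixing the layer types listed in the abstract:
    convolutional, linear, and self-attention layers."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.heads = heads
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):  # x: (batch, time, dim)
        x = self.conv(x.transpose(1, 2)).transpose(1, 2)
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape to (batch, heads, time, head_dim)
        q, k, v = (z.view(b, t, self.heads, d // self.heads).transpose(1, 2)
                   for z in (q, k, v))
        # scaled_dot_product_attention dispatches to a flash-attention kernel
        # on supported GPUs (PyTorch >= 2.0).
        attn = F.scaled_dot_product_attention(q, k, v)
        return self.out(attn.transpose(1, 2).reshape(b, t, d))

def train_step_loop(model, batches, accum_steps=4, device="cuda"):
    """Mixed-precision training with gradient accumulation on a single GPU."""
    use_cuda = device == "cuda"
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)
    model.train()
    for step, (features, targets) in enumerate(batches):
        features, targets = features.to(device), targets.to(device)
        with torch.autocast(device_type=device, enabled=use_cuda):
            pred = model(features)
            # divide so the accumulated gradient matches a full-batch update
            loss = F.mse_loss(pred, targets) / accum_steps
        scaler.scale(loss).backward()
        # update weights only every `accum_steps` micro-batches
        if (step + 1) % accum_steps == 0:
            scaler.step(optimizer)
            scaler.update()
            optimizer.zero_grad(set_to_none=True)

if __name__ == "__main__":
    dev = "cuda" if torch.cuda.is_available() else "cpu"
    model = TinySelfAttentionBlock().to(dev)
    # dummy (features, targets) micro-batches standing in for acoustic features
    data = [(torch.randn(8, 100, 256), torch.randn(8, 100, 256)) for _ in range(8)]
    train_step_loop(model, data, device=dev)
```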
- Anthology ID: 2024.findings-eacl.23
- Volume: Findings of the Association for Computational Linguistics: EACL 2024
- Month: March
- Year: 2024
- Address: St. Julian’s, Malta
- Editors: Yvette Graham, Matthew Purver
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 340–346
- URL: https://aclanthology.org/2024.findings-eacl.23
- Cite (ACL): Luis Lugo and Valentin Vielzeuf. 2024. Towards efficient self-supervised representation learning in speech processing. In Findings of the Association for Computational Linguistics: EACL 2024, pages 340–346, St. Julian’s, Malta. Association for Computational Linguistics.
- Cite (Informal): Towards efficient self-supervised representation learning in speech processing (Lugo & Vielzeuf, Findings 2024)
- PDF: https://preview.aclanthology.org/nschneid-patch-1/2024.findings-eacl.23.pdf