Comparison of Conventional Hybrid and CTC/Attention Decoders for Continuous Visual Speech Recognition

David Gimeno-Gómez, Carlos-D. Martínez-Hinarejos


Abstract
Thanks to the rise of deep learning and the availability of large-scale audio-visual databases, recent advances have been achieved in Visual Speech Recognition (VSR). As in other speech processing tasks, these end-to-end VSR systems are usually based on encoder-decoder architectures. While encoders are fairly general, multiple decoding approaches have been explored, such as the conventional hybrid model based on Deep Neural Networks combined with Hidden Markov Models (DNN-HMM) or the Connectionist Temporal Classification (CTC) paradigm. However, for languages and tasks in which data is scarce, there is no clear comparison between the different types of decoders. Therefore, we focused our study on how the conventional DNN-HMM decoder and its state-of-the-art CTC/Attention counterpart behave depending on the amount of data used for their estimation. We also analyzed the extent to which our visual speech features were able to adapt to scenarios for which they were not explicitly trained, considering either a similar dataset or one collected for a different language. Results showed that, in data-scarcity scenarios, the conventional paradigm reached recognition rates that surpass those of the CTC/Attention model, while also requiring less training time and fewer parameters.
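As background to the CTC/Attention decoder named in the abstract: such decoders typically rank candidate transcriptions by a weighted combination of the CTC and attention log-probabilities (as popularized by Watanabe et al., 2017). The following is a minimal illustrative Python sketch of that joint scoring rule, not the authors' code; the weight value and the toy probabilities are hypothetical.

import math

def joint_score(log_p_ctc: float, log_p_att: float, ctc_weight: float = 0.3) -> float:
    """Rank a hypothesis by a weighted sum of CTC and attention log-probabilities.

    `ctc_weight` plays the role of the interpolation weight (lambda);
    0.3 here is an illustrative value, not one taken from the paper.
    """
    return ctc_weight * log_p_ctc + (1.0 - ctc_weight) * log_p_att

# Toy usage: rescore two candidate transcriptions (made-up numbers).
hyps = {
    "hello world": (math.log(0.20), math.log(0.35)),
    "hollow word": (math.log(0.25), math.log(0.10)),
}
best = max(hyps, key=lambda h: joint_score(*hyps[h]))
print(best)  # -> "hello world"

In practice the two scores come from a shared encoder feeding a CTC output layer and an attention-based decoder; the sketch only shows how their per-hypothesis scores are combined during beam search or rescoring.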
Anthology ID:
2024.lrec-main.321
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
3628–3638
URL:
https://aclanthology.org/2024.lrec-main.321
Cite (ACL):
David Gimeno-Gómez and Carlos-D. Martínez-Hinarejos. 2024. Comparison of Conventional Hybrid and CTC/Attention Decoders for Continuous Visual Speech Recognition. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 3628–3638, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Comparison of Conventional Hybrid and CTC/Attention Decoders for Continuous Visual Speech Recognition (Gimeno-Gómez & Martínez-Hinarejos, LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.321.pdf