Yukiya Hono
2024
Integrating Pre-Trained Speech and Language Models for End-to-End Speech Recognition
Yukiya Hono | Koh Mitsuda | Tianyu Zhao | Kentaro Mitsui | Toshiaki Wakatsuki | Kei Sawada
Findings of the Association for Computational Linguistics: ACL 2024
Advances in machine learning have made it possible to perform various text and speech processing tasks, such as automatic speech recognition (ASR), in an end-to-end (E2E) manner. E2E approaches that utilize pre-trained models are gaining attention because they conserve training data and resources. However, most of their applications in ASR involve only one of either a pre-trained speech model or a pre-trained language model. This paper proposes integrating a pre-trained speech representation model and a large language model (LLM) for E2E ASR. The proposed model enables optimization of the entire ASR process, including acoustic feature extraction and acoustic and language modeling, by connecting the pre-trained models with a bridge network, and it also allows the application of recent advances in LLM utilization, such as parameter-efficient domain adaptation and inference optimization. Experimental results demonstrate that the proposed model achieves performance comparable to that of modern E2E ASR models by leveraging powerful pre-trained models through the proposed integrated approach.
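The abstract describes connecting a pre-trained speech representation model to an LLM through a bridge network but does not specify the bridge design or model sizes. The PyTorch sketch below is a minimal illustration under assumed shapes: small stand-in modules take the place of the actual pre-trained HuBERT-style encoder and LLM, and a hypothetical bridge (strided convolution plus linear projection) maps speech features into the LLM embedding space, where they are prepended to the text-token embeddings so the whole pipeline can in principle be optimized end to end.

```python
# Minimal sketch; module choices, dimensions, and the bridge design are
# illustrative assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn


class BridgeNetwork(nn.Module):
    """Hypothetical bridge: maps speech-encoder features into the LLM embedding space."""

    def __init__(self, speech_dim: int, llm_dim: int, stride: int = 4):
        super().__init__()
        # Reduce the frame rate with a strided convolution, then project.
        self.downsample = nn.Conv1d(speech_dim, llm_dim, kernel_size=stride, stride=stride)
        self.proj = nn.Linear(llm_dim, llm_dim)

    def forward(self, speech_feats: torch.Tensor) -> torch.Tensor:
        # speech_feats: (batch, frames, speech_dim)
        x = self.downsample(speech_feats.transpose(1, 2)).transpose(1, 2)
        return self.proj(x)  # (batch, frames // stride, llm_dim)


# Stand-ins for the pre-trained models; in practice these would be loaded
# from HuBERT-style and LLM checkpoints and possibly kept (partially) frozen.
speech_encoder = nn.GRU(input_size=80, hidden_size=768, batch_first=True)
llm_embedding = nn.Embedding(32000, 2048)
llm_backbone = nn.TransformerEncoderLayer(d_model=2048, nhead=16, batch_first=True)
bridge = BridgeNetwork(speech_dim=768, llm_dim=2048)

# Acoustic features -> speech representations -> bridged "speech prefix",
# which is prepended to the text-token embeddings so the LLM produces the
# transcription conditioned on the audio (causal masking omitted here).
fbank = torch.randn(1, 200, 80)                 # (batch, frames, mel bins)
speech_repr, _ = speech_encoder(fbank)          # (1, 200, 768)
prefix = bridge(speech_repr)                    # (1, 50, 2048)
text_tokens = torch.randint(0, 32000, (1, 10))  # transcription tokens so far
inputs = torch.cat([prefix, llm_embedding(text_tokens)], dim=1)
hidden = llm_backbone(inputs)                   # (1, 60, 2048)
```

Training only the bridge (and optionally parameter-efficient adapters in the LLM) while keeping the two pre-trained models largely frozen is one plausible way such an integration could conserve data and compute, as the abstract suggests.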
Release of Pre-Trained Models for the Japanese Language
Kei Sawada | Tianyu Zhao | Makoto Shing | Kentaro Mitsui | Akio Kaga | Yukiya Hono | Toshiaki Wakatsuki | Koh Mitsuda
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
AI democratization aims to create a world in which the average person can utilize AI techniques. To achieve this goal, numerous research institutes have attempted to make their results accessible to the public. In particular, large pre-trained models trained on large-scale data have shown unprecedented potential, and their release has had a significant impact. However, most of the released models specialize in the English language, and thus AI democratization in non-English-speaking communities is lagging significantly. To reduce this gap in AI access, we released Generative Pre-trained Transformer (GPT), Contrastive Language and Image Pre-training (CLIP), Stable Diffusion, and Hidden-unit Bidirectional Encoder Representations from Transformers (HuBERT) models pre-trained on Japanese data. By providing these models, users can freely interface with AI that aligns with Japanese cultural values and preserves the identity of Japanese culture, thus enhancing the democratization of AI. Additionally, experiments showed that pre-trained models specialized for Japanese can efficiently achieve high performance on Japanese tasks.
Co-authors
- Koh Mitsuda 2
- Tianyu Zhao 2
- Kentaro Mitsui 2
- Toshiaki Wakatsuki 2
- Kei Sawada 2
- Makoto Shing 1
- Akio Kaga 1