Li-Wei Chen
2026
TG-ASR: Translation-Guided Learning with Parallel Gated Cross Attention for Low-Resource Automatic Speech Recognition
ChengYeh Yang | Chien-Chun Wang | Li-Wei Chen | Hung-Shin Lee | Hsin-Min Wang | Berlin Chen
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Low-resource automatic speech recognition remains a critical challenge due to the scarcity of transcribed data for many languages. Taiwanese Hokkien exemplifies this problem: although extensive speech content exists in television dramas and online videos, transcriptions are scarce and most available subtitles are in Mandarin. To address this gap, this paper presents TG-ASR, a translation-guided ASR framework for Taiwanese drama speech recognition that leverages multilingual translation embeddings to enhance recognition in low-resource conditions. The framework centers on the parallel gated cross-attention (PGCA) mechanism, which adaptively integrates embeddings from multiple auxiliary languages into the ASR decoder. This mechanism enables robust cross-linguistic semantic guidance while maintaining stable optimization and avoiding interference between languages. To support future research, we release YT-THDC, a 30-hour corpus of Taiwanese drama speech with aligned Mandarin subtitles and manually verified Taiwanese transcriptions. Extensive experiments and analysis identify which auxiliary languages most effectively improve Taiwanese ASR, achieving a 13.51% relative reduction in character error rate and demonstrating the potential of translation-guided learning for underrepresented languages in real-world scenarios.
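To make the gated-fusion idea concrete, below is a minimal PyTorch sketch of a parallel gated cross-attention style block; it is not the paper's implementation, and the module names, shapes, and gating choice (sigmoid gate over concatenated decoder state and attended context) are illustrative assumptions.

import torch
import torch.nn as nn

class ParallelGatedCrossAttention(nn.Module):
    """Sketch: a decoder-side block that cross-attends to several auxiliary
    translation embeddings in parallel and gates each stream before fusing."""
    def __init__(self, d_model: int, n_heads: int, n_aux: int):
        super().__init__()
        # One cross-attention module per auxiliary language, run in parallel.
        self.cross_attns = nn.ModuleList(
            nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            for _ in range(n_aux)
        )
        # One gate per stream, conditioned on decoder state and attended context.
        self.gates = nn.ModuleList(
            nn.Linear(2 * d_model, d_model) for _ in range(n_aux)
        )
        self.norm = nn.LayerNorm(d_model)

    def forward(self, dec_states, aux_embs):
        # dec_states: (B, T_dec, d_model) decoder hidden states
        # aux_embs:   list of (B, T_aux_i, d_model) translation embeddings
        fused = dec_states
        for attn, gate, mem in zip(self.cross_attns, self.gates, aux_embs):
            ctx, _ = attn(query=dec_states, key=mem, value=mem)
            g = torch.sigmoid(gate(torch.cat([dec_states, ctx], dim=-1)))
            fused = fused + g * ctx  # gated residual fusion per auxiliary language
        return self.norm(fused)

In this reading, the gate lets the decoder down-weight an auxiliary language whose embedding conflicts with the others, which is one plausible way to obtain the stability and non-interference properties the abstract describes.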
2023
Latent Positional Information is in the Self-Attention Variance of Transformer Language Models Without Positional Embeddings
Ta-Chung Chi | Ting-Han Fan | Li-Wei Chen | Alexander Rudnicky | Peter Ramadge
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
The use of positional embeddings in transformer language models is widely accepted. However, recent research has called into question the necessity of such embeddings. We further extend this inquiry by demonstrating that a randomly initialized and frozen transformer language model, devoid of positional embeddings, inherently encodes strong positional information through the shrinkage of self-attention variance. To quantify this variance, we derive the underlying distribution of each step within a transformer layer. Through empirical validation using a fully pretrained model, we show that the variance shrinkage effect still persists after extensive gradient updates. Our findings serve to justify the decision to discard positional embeddings and thus facilitate more efficient pretraining of transformer language models.
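The variance-shrinkage effect can be observed directly in a toy setting. The following sketch (assumed dimensions and sample counts, not the paper's analysis code) passes position-free random token embeddings through a randomly initialized, frozen causal self-attention layer and prints the per-position output variance, which decreases with position because later positions average over more tokens.

import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, n_heads, seq_len, n_samples = 64, 4, 128, 256

# Randomly initialized and frozen attention layer, no positional embeddings anywhere.
attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
for p in attn.parameters():
    p.requires_grad_(False)

# Causal mask: True entries are disallowed, so position t attends only to positions <= t.
mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)

x = torch.randn(n_samples, seq_len, d_model)  # token embeddings without positional encoding
with torch.no_grad():
    out, _ = attn(x, x, x, attn_mask=mask)

# Variance across samples and feature dimensions, per position:
# early positions show larger variance, later positions smaller.
var_per_pos = out.var(dim=(0, 2))
print(var_per_pos[:5])
print(var_per_pos[-5:])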