Jinghan Yang
2025
Scaling Under-Resourced TTS: A Data-Optimized Framework with Advanced Acoustic Modeling for Thai
Yizhong Geng | Jizhuo Xu | Zeyu Liang | Jinghan Yang | Xiaoyi Shi | Xiaoyu Shen
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track)
Text-to-speech (TTS) technology has achieved impressive results for widely spoken languages, yet many under-resourced languages remain challenged by limited data and linguistic complexity. In this paper, we present a novel methodology that integrates a data-optimized framework with an advanced acoustic model to build high-quality TTS systems for low-resource scenarios. We demonstrate the effectiveness of our approach using Thai as an illustrative case, where intricate phonetic rules and sparse resources are effectively addressed. Our method enables zero-shot voice cloning and improved performance across diverse client applications, spanning finance, healthcare, education, and law. Extensive subjective and objective evaluations confirm that our model meets state-of-the-art standards, offering a scalable solution for TTS production in data-limited settings, with significant implications for broader industry adoption and multilingual accessibility. All demos are available at https://luoji.cn/static/thai/demo.html.
2024
Relabeling Minimal Training Subset to Flip a Prediction
Jinghan Yang | Linjie Xu | Lequan Yu
Findings of the Association for Computational Linguistics: EACL 2024
2023
How Many and Which Training Points Would Need to be Removed to Flip this Prediction?
Jinghan Yang | Sarthak Jain | Byron C. Wallace
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
We consider the problem of identifying a minimal subset of training data 𝒮_t such that, if the instances comprising 𝒮_t had been removed prior to training, the categorization of a given test point x_t would have been different. Identifying such a set may be of interest for a few reasons. First, the cardinality of 𝒮_t provides a measure of robustness (if |𝒮_t| is small for x_t, we might be less confident in the corresponding prediction), which we show is correlated with but complementary to predicted probabilities. Second, interrogation of 𝒮_t may provide a novel mechanism for contesting a particular model prediction: if one can make the case that the points in 𝒮_t are wrongly labeled or irrelevant, this may argue for overturning the associated prediction. Identifying 𝒮_t via brute force is intractable. We propose comparatively fast approximation methods to find 𝒮_t based on influence functions, and find that, for simple convex text classification models, these approaches can often successfully identify relatively small sets of training examples which, if removed, would flip the prediction.
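The influence-function idea in this abstract can be sketched for a toy convex model. The snippet below is an illustrative assumption, not the paper's actual implementation: it fits a small L2-regularized logistic regression with Newton's method, estimates how much removing each training point would shift a test point's logit (via x_t^T H^{-1} ∇L(z_i)), checks those estimates against true leave-one-out retraining, and then greedily drops the highest-influence points until the prediction flips. The synthetic data, hyperparameters, and function names are all invented for the example.

```python
import numpy as np

def fit_logreg(X, y, lam=0.1, iters=30):
    """L2-regularized logistic regression fit by Newton's method.
    Returns the weights and the (total) Hessian at the optimum."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
        g = X.T @ (p - y) + lam * w
        H = X.T @ (X * (p * (1 - p))[:, None]) + lam * np.eye(X.shape[1])
        w = w - np.linalg.solve(H, g)
    return w, H

def influence_scores(X, y, w, H, x_t):
    """Predicted change in the test logit x_t @ w if each training point
    were removed: delta_i ~ x_t^T H^{-1} grad_w L(z_i)."""
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
    grads = X * (p - y)[:, None]  # per-example loss gradients
    return grads @ np.linalg.solve(H, x_t)

# two Gaussian blobs plus a bias column (synthetic, for illustration only)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.6, (20, 2)), rng.normal(1.0, 0.6, (20, 2))])
X = np.hstack([X, np.ones((40, 1))])
y = np.array([0] * 20 + [1] * 20, dtype=float)

w, H = fit_logreg(X, y)
x_t = np.array([0.4, 0.4, 1.0])  # a borderline test point
scores = influence_scores(X, y, w, H, x_t)

# sanity check: influence estimates should track true leave-one-out changes
actual = np.empty(len(y))
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    w_i, _ = fit_logreg(X[mask], y[mask])
    actual[i] = x_t @ w_i - x_t @ w
r = np.corrcoef(scores, actual)[0, 1]
print(f"correlation with true LOO logit change: {r:.3f}")

# greedy search for a flipping subset: remove points whose deletion pushes
# the test logit toward the opposite class, retrain, and stop on a flip
pred0 = int(x_t @ w > 0)
order = np.argsort(scores) if pred0 else np.argsort(-scores)
keep = np.ones(len(y), dtype=bool)
for k, i in enumerate(order, 1):
    keep[i] = False
    w_k, _ = fit_logreg(X[keep], y[keep])
    if int(x_t @ w_k > 0) != pred0:
        print(f"prediction flipped after removing {k} training points")
        break
```

For convex models like this, the first-order influence estimate closely matches actual retraining, which is why the greedy ranking tends to find small flipping subsets without a brute-force search over all subsets.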