2023
1-step Speech Understanding and Transcription Using CTC Loss
Karan Singla | Shahab Jalalvand | Yeon-Jun Kim | Andrej Ljolje | Antonio Moreno Daniel | Srinivas Bangalore | Benjamin Stern
Proceedings of the 20th International Conference on Natural Language Processing (ICON)
Recent studies have made some progress in refining end-to-end (E2E) speech recognition encoders by applying Connectionist Temporal Classification (CTC) loss to enhance named entity recognition within transcriptions. However, these methods have been constrained by their exclusive use of the ASCII character set, allowing only a limited array of semantic labels. We propose 1SPU, a 1-step Speech Processing Unit that can recognize speech events (e.g., speaker change) or natural language events (intent, emotion) while also transcribing vocal content. It extends the E2E automatic speech recognition (ASR) system's vocabulary with a set of unused placeholder symbols, conceptually akin to the <pad> tokens used in sequence modeling. These placeholders are then assigned to semantic events (in the form of tags) and are integrated into the transcription process as distinct tokens. Our approach yields notable improvements on the SLUE benchmark and results on par with those on the SLURP dataset. Additionally, we provide a visual analysis of the system's proficiency in accurately pinpointing meaningful tokens over time, illustrating how the supplementary semantic tags improve transcription quality.
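The abstract's core mechanism, extending a CTC vocabulary with placeholder symbols that decode inline as semantic tags, can be sketched as follows. This is a minimal illustration under an assumed token inventory, tag set, and standard greedy CTC decoder; it is not the paper's implementation.

```python
# Minimal greedy-CTC sketch of the placeholder-token idea (illustrative
# vocabulary and tags, not the paper's actual inventory).
BLANK = 0  # CTC blank symbol

# Base character vocabulary; index 0 is reserved for the blank.
chars = list(" abcdefghijklmnopqrstuvwxyz'")
vocab = {i + 1: c for i, c in enumerate(chars)}

# Unused placeholder symbols appended past the character range and
# reassigned to semantic events, emitted as distinct tokens.
for i, tag in enumerate(["<spk_change>", "<intent:pay_bill>", "<emotion:angry>"]):
    vocab[len(chars) + 1 + i] = tag

def greedy_ctc_decode(frame_ids):
    """Collapse repeated frame labels and drop blanks (standard greedy CTC)."""
    out, prev = [], None
    for idx in frame_ids:
        if idx != BLANK and idx != prev:
            out.append(vocab[idx])
        prev = idx
    return "".join(out)

# Frame-level argmax sequence: the tag token decodes just like a character.
frames = [30, 0, 17, 2, 26, 26, 0, 1, 3, 10, 13, 0, 13]
print(greedy_ctc_decode(frames))  # -> <intent:pay_bill>pay bill
```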
Combining Pre-trained Speech and Text Encoders for Continuous Spoken Language Processing
Karan Singla | Mahnoosh Mehrabani | Daniel Pressel | Ryan Price | Bhargav Srinivas Chinnari | Yeon-Jun Kim | Srinivas Bangalore
Proceedings of the 20th International Conference on Natural Language Processing (ICON)
E2E Spoken Entity Extraction for Virtual Agents
Karan Singla | Yeon-Jun Kim | Srinivas Bangalore
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track
In human-computer conversations, extracting entities such as names, street addresses and email addresses from speech is a challenging task. In this paper, we study the impact of fine-tuning pre-trained speech encoders on extracting spoken entities in human-readable form directly from speech, without the need for text transcription. We illustrate that such a direct approach optimizes the encoder to transcribe only the entity-relevant portions of speech, ignoring superfluous portions such as carrier phrases and spelled-out name entities. In the context of dialog with an enterprise virtual agent, we demonstrate that this 1-step approach outperforms the typical 2-step approach, which first generates lexical transcriptions and then applies text-based entity extraction to identify spoken entities.
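A minimal sketch of the contrast between the 2-step and 1-step pipelines described above. The regex extractor, example transcript, and training target are illustrative assumptions, not the paper's models or data.

```python
import re

def extract_email(lexical_transcript):
    """Step 2 of the 2-step pipeline: text-based extraction over ASR output."""
    m = re.search(r"[\w.]+@[\w.]+\.\w+", lexical_transcript)
    return m.group(0) if m else None

# 2-step: the lexical transcript keeps the carrier phrase and renders the
# entity as it was spoken, which the downstream text extractor can miss.
asr_output = "sure my email is j o h n dot smith at example dot com"
print(extract_email(asr_output))  # -> None

# 1-step: the fine-tuned encoder is instead trained to emit only the entity
# in human-readable form, so the target for the same audio is simply:
one_step_target = "john.smith@example.com"
print(one_step_target)
```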
2021
A Hybrid Approach to Scalable and Robust Spoken Language Understanding in Enterprise Virtual Agents
Ryan Price | Mahnoosh Mehrabani | Narendra Gupta | Yeon-Jun Kim | Shahab Jalalvand | Minhua Chen | Yanjie Zhao | Srinivas Bangalore
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Papers
Spoken language understanding (SLU) extracts the intended meaning from a user utterance and is a critical component of conversational virtual agents. In enterprise virtual agents (EVAs), language understanding is substantially challenging. First, the users are infrequent callers who are unfamiliar with the expectations of a pre-designed conversation flow. Second, the users are paying customers of an enterprise who demand a reliable, consistent and efficient user experience when resolving their issues. In this work, we describe a general and robust framework for intent and entity extraction that utilizes a hybrid of statistical and rule-based approaches. Our framework includes confidence modeling that incorporates information from all components in the SLU pipeline, a critical addition for EVAs to ensure accuracy. Our focus is on creating accurate and scalable SLU that can be deployed rapidly for a large class of EVA applications with little need for human intervention.
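A minimal sketch of the hybrid described above: high-precision rules are tried first, a statistical classifier covers the remainder, and a confidence score fusing ASR and SLU evidence gates the result. The rules, scores, and threshold here are illustrative assumptions; the paper's confidence model is learned from information across all pipeline components.

```python
import re

# Hypothetical high-precision rule set for the sketch.
RULES = [(re.compile(r"\b(pay|payment)\b.*\bbill\b"), "PayBill")]

def statistical_intent(utterance):
    """Stand-in for a trained classifier; returns (intent, posterior)."""
    return ("CheckBalance", 0.62)  # dummy output for the sketch

def understand(utterance, asr_confidence):
    for pattern, intent in RULES:
        if pattern.search(utterance):
            intent_out, slu_score = intent, 1.0  # rules treated as high precision
            break
    else:
        intent_out, slu_score = statistical_intent(utterance)
    # Confidence modeling: fuse ASR and SLU evidence (a simple product here)
    # and reject to a clarification prompt when the fused score is too low.
    confidence = asr_confidence * slu_score
    return (intent_out if confidence >= 0.5 else "REJECT_TO_CLARIFY", confidence)

print(understand("i want to pay my bill", asr_confidence=0.9))  # ('PayBill', 0.9)
```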
2012
Building Text-To-Speech Voices in the Cloud
Alistair Conkie | Thomas Okken | Yeon-Jun Kim | Giuseppe Di Fabbrizio
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
The AT&T VoiceBuilder provides a new tool to researchers and practitioners who want to have their voices synthesized by a high-quality commercial-grade text-to-speech system without the need to install, configure, or manage speech processing software and equipment. It is implemented as a web service on the AT&T Speech Mashup Portal.The system records and validates users' utterances, processes them to build a synthetic voice and provides a web service API to make the voice available to real-time applications through a scalable cloud-based processing platform. All the procedures are automated to avoid human intervention. We present experimental comparisons of voices built using the system.