Usman Shahid
2022
FPI: Failure Point Isolation in Large-scale Conversational Assistants
Rinat Khaziev | Usman Shahid | Tobias Röding | Rakesh Chada | Emir Kapanci | Pradeep Natarajan
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track
Large-scale conversational assistants such as Cortana, Alexa, Google Assistant and Siri process requests through a series of modules for wake word detection, speech recognition, language understanding and response generation. An error in one of these modules can cascade through the system. Given the large traffic volumes in these assistants, it is infeasible to manually analyze the data, identify requests with processing errors and isolate the source of error. We present a machine learning system to address this challenge. First, we embed the incoming request and context, such as system response and subsequent turns, using pre-trained transformer models. Then, we combine these embeddings with encodings of additional metadata features (such as confidence scores from different modules in the online system) using a “mixing-encoder” to output the failure point predictions. Our system obtains 92.2% of human performance on this task while scaling to analyze the entire traffic in 8 different languages of a large-scale conversational assistant. We present detailed ablation studies analyzing the impact of different modeling choices.
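The abstract's fusion step can be illustrated with a small sketch: a "mixing-encoder" that concatenates a pre-trained text embedding with an encoding of per-module metadata (e.g. confidence scores) and outputs one logit per candidate failure point. All dimensions, layer sizes, and names here are hypothetical choices for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MixingEncoder(nn.Module):
    """Illustrative sketch (not the paper's implementation): fuse a
    transformer text embedding of the request/context with encoded
    metadata features, then predict which pipeline module failed."""

    def __init__(self, text_dim=768, meta_dim=8, hidden=128, n_modules=4):
        super().__init__()
        # Encode raw metadata (e.g. confidence scores) into a dense vector.
        self.meta_encoder = nn.Sequential(nn.Linear(meta_dim, hidden), nn.ReLU())
        # Mix text and metadata representations; one logit per failure point
        # (e.g. wake word, speech recognition, understanding, response).
        self.mixer = nn.Sequential(
            nn.Linear(text_dim + hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_modules),
        )

    def forward(self, text_emb, meta):
        mixed = torch.cat([text_emb, self.meta_encoder(meta)], dim=-1)
        return self.mixer(mixed)

# Toy batch of 2 requests: 768-d text embeddings, 8 metadata features each.
logits = MixingEncoder()(torch.randn(2, 768), torch.randn(2, 8))
```

In a real system the text embedding would come from a pre-trained transformer over the request, system response, and subsequent turns, and the argmax over the logits would name the suspected failure point.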
2020
Detecting and understanding moral biases in news
Usman Shahid | Barbara Di Eugenio | Andrew Rojecki | Elena Zheleva
Proceedings of the First Joint Workshop on Narrative Understanding, Storylines, and Events
We describe work in progress on detecting and understanding the moral biases of news sources by combining framing theory with natural language processing. First we draw connections between issue-specific frames and moral frames that apply to all issues. Then we analyze the connection between moral frame presence and news source political leaning. We develop and test a simple classification model for detecting the presence of a moral frame, highlighting the need for more sophisticated models. We also discuss some of the annotation and frame detection challenges that can inform future research in this area.
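A "simple classification model for detecting the presence of a moral frame" could look like the following baseline sketch: TF-IDF features with logistic regression on binary labels. The example sentences and labels are invented for illustration; the paper's actual data, features, and model are not specified here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data: 1 = moral frame present, 0 = absent.
texts = [
    "The policy is a betrayal of our duty to care for the vulnerable",
    "Officials announced the budget figures on Tuesday",
    "Critics called the ruling deeply unjust and a violation of fairness",
    "The committee will meet again next week",
]
labels = [1, 0, 1, 0]

# Bag-of-words baseline: TF-IDF features into a logistic regression.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
pred = clf.predict(["They say the law cruelly punishes the poor"])[0]
```

A baseline this simple is exactly what the abstract suggests is insufficient, which motivates its call for more sophisticated models.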