David Suendermann-Oeft

Also published as: David Suendermann


2020

On the Utility of Audiovisual Dialog Technologies and Signal Analytics for Real-time Remote Monitoring of Depression Biomarkers
Michael Neumann | Oliver Roessler | David Suendermann-Oeft | Vikram Ramanarayanan
Proceedings of the First Workshop on Natural Language Processing for Medical Conversations

We investigate the utility of audiovisual dialog systems combined with speech and video analytics for real-time remote monitoring of depression at scale in uncontrolled environments. We collected audiovisual conversational data from participants who interacted with a cloud-based multimodal dialog system, and automatically extracted a large set of speech and vision metrics based on the rich existing literature of laboratory studies. We report on the efficacy of various audio and video metrics in differentiating people with mild, moderate, and severe depression, and discuss the implications of these results for the deployment of such technologies in real-world neurological diagnosis and monitoring applications.
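
As a rough illustration of the kind of speech metric such a pipeline can compute automatically, the sketch below derives a simple energy-based pause ratio from a mono waveform. This is a minimal sketch only: the frame length, the relative energy threshold, and the function name are illustrative assumptions, not the feature set used in the paper.

    import numpy as np

    def pause_ratio(samples: np.ndarray, sample_rate: int,
                    frame_ms: float = 25.0, energy_floor: float = 0.02) -> float:
        """Fraction of frames classified as silence by a crude energy threshold.

        Illustrative stand-in for the timing-related metrics (pause behaviour,
        speaking time) studied in the depression-monitoring literature; the
        threshold and frame size are arbitrary assumptions, not the paper's values.
        """
        frame_len = int(sample_rate * frame_ms / 1000)
        n_frames = len(samples) // frame_len
        frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
        rms = np.sqrt((frames ** 2).mean(axis=1))   # per-frame energy
        silent = rms < energy_floor * rms.max()     # relative silence threshold
        return float(silent.mean())

    # Toy usage: three seconds of noise with a silent gap in the middle.
    sr = 16000
    audio = np.random.randn(3 * sr) * 0.1
    audio[sr : 2 * sr] = 0.0
    print(f"pause ratio: {pause_ratio(audio, sr):.2f}")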

2018

From dictations to clinical reports using machine translation
Gregory Finley | Wael Salloum | Najmeh Sadoughi | Erik Edwards | Amanda Robinson | Nico Axtmann | Michael Brenndoerfer | Mark Miller | David Suendermann-Oeft
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers)

A typical workflow to document clinical encounters entails dictating a summary, running speech recognition, and post-processing the resulting text into a formatted letter. Post-processing entails a host of transformations, including punctuation restoration, truecasing, marking sections and headers, converting dates and numerical expressions, parsing lists, etc. In conventional implementations, most of these tasks are accomplished by individual modules. We introduce a novel holistic approach to post-processing that relies on machine translation. We show how this technique outperforms an alternative conventional system, even learning to correct speech recognition errors during post-processing, while being much simpler to maintain.
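
To make the translation framing concrete, here is a minimal sketch of how parallel training data for such a system might be prepared for an off-the-shelf sequence-to-sequence toolkit, with raw recognizer output on the source side and formatted report text on the target side. The file names, example pairs, and newline encoding are assumptions for illustration, not details taken from the paper.

    from pathlib import Path

    # Hypothetical parallel examples: raw ASR dictation -> formatted report text.
    pairs = [
        ("patient seen on march third two thousand eighteen for followup",
         "Patient seen on 03/03/2018 for follow-up."),
        ("impression one hypertension two type two diabetes",
         "Impression:\n1. Hypertension\n2. Type 2 diabetes"),
    ]

    src, tgt = Path("train.src"), Path("train.tgt")
    with src.open("w") as fs, tgt.open("w") as ft:
        for dictation, report in pairs:
            fs.write(dictation.strip() + "\n")
            # Newlines inside the target are encoded as an explicit token so each
            # example stays on one line, as most NMT toolkits expect.
            ft.write(report.strip().replace("\n", " <nl> ") + "\n")

    print(f"wrote {len(pairs)} sentence pairs to {src} and {tgt}")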

An automated medical scribe for documenting clinical encounters
Gregory Finley | Erik Edwards | Amanda Robinson | Michael Brenndoerfer | Najmeh Sadoughi | James Fone | Nico Axtmann | Mark Miller | David Suendermann-Oeft
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations

A medical scribe is a clinical professional who charts patient–physician encounters in real time, relieving physicians of most of their administrative burden and substantially increasing productivity and job satisfaction. We present a complete implementation of an automated medical scribe. Our system can serve either as a scalable, standardized, and economical alternative to human scribes, or as an assistive tool for them, providing a first draft of a report along with a convenient means to modify it. To our knowledge, this is the first automated scribe ever presented; it relies on multiple speech and language technologies, including speaker diarization, medical speech recognition, knowledge extraction, and natural language generation.
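
A heavily simplified sketch of the pipeline structure described above (speaker diarization, medical speech recognition, knowledge extraction, natural language generation) follows. Every function body is a stub standing in for a real component, and none of the names or example outputs come from the paper.

    from dataclasses import dataclass

    @dataclass
    class Turn:
        speaker: str   # e.g. "physician" or "patient"
        text: str      # recognized speech for this turn

    def diarize_and_recognize(audio_path: str) -> list[Turn]:
        """Stub for speaker diarization plus medical ASR."""
        return [Turn("physician", "any chest pain or shortness of breath"),
                Turn("patient", "no chest pain but I get short of breath on stairs")]

    def extract_findings(turns: list[Turn]) -> dict:
        """Stub for knowledge extraction mapping mentions to report fields."""
        return {"chest pain": "denied", "dyspnea on exertion": "reported"}

    def generate_report(findings: dict) -> str:
        """Stub for natural language generation of the draft note."""
        lines = [f"- {symptom}: {status}" for symptom, status in findings.items()]
        return "Review of systems:\n" + "\n".join(lines)

    turns = diarize_and_recognize("encounter.wav")
    print(generate_report(extract_findings(turns)))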

Leveraging Multimodal Dialog Technology for the Design of Automated and Interactive Student Agents for Teacher Training
David Pautler | Vikram Ramanarayanan | Kirby Cofino | Patrick Lange | David Suendermann-Oeft
Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue

We present a paradigm for interactive teacher training that leverages multimodal dialog technology to puppeteer custom-designed embodied conversational agents (ECAs) in student roles. We used the open-source multimodal dialog system HALEF to implement a small-group classroom math discussion involving Venn diagrams, in which a human teacher candidate interacts with two student ECAs whose actions are controlled by the dialog system. Such an automated paradigm has the potential to be extended and scaled to a wide range of interactive simulation scenarios in education, medicine, and business where group interaction training is essential.
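
As a toy illustration of how a dialog manager might puppeteer two student agents, the sketch below alternates scripted student turns in response to teacher prompts. HALEF's actual dialog control is far richer than this loop; the agent names and lines are invented for illustration.

    import random

    # Hypothetical scripted behaviour for two student agents.
    STUDENT_LINES = {
        "student_a": ["I think the circles should overlap here.",
                      "Wait, does 12 go in both sets?"],
        "student_b": ["I put 12 only in the multiples-of-3 circle.",
                      "Can you explain the middle part again?"],
    }

    def next_student_turn(history):
        """Pick which student ECA responds next, avoiding the previous student speaker."""
        last_student = next((s for s, _ in reversed(history) if s != "teacher"), None)
        candidates = [s for s in STUDENT_LINES if s != last_student]
        speaker = random.choice(candidates)
        return speaker, random.choice(STUDENT_LINES[speaker])

    history = []
    for prompt in ["Where would the number 12 go?", "Why do the circles overlap?"]:
        history.append(("teacher", prompt))
        speaker, line = next_student_turn(history)
        history.append((speaker, line))
        print(f"teacher: {prompt}\n{speaker}: {line}")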

2017

Deep Learning for Punctuation Restoration in Medical Reports
Wael Salloum | Greg Finley | Erik Edwards | Mark Miller | David Suendermann-Oeft
BioNLP 2017

In clinical dictation, speakers try to be as concise as possible to save time, often resulting in utterances without explicit punctuation commands. Since the end product of a dictated report, e.g., an out-patient letter, does require correct orthography, including exact punctuation, the latter needs to be restored, preferably by automated means. This paper describes a method for punctuation restoration based on a state-of-the-art stack of NLP and machine learning techniques, including bidirectional recurrent neural networks (B-RNNs) with an attention mechanism and late fusion, as well as a feature extraction technique tailored to the processing of medical terminology using a novel vocabulary reduction model. To the best of our knowledge, the resulting performance is superior to that reported in prior art on similar tasks.
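
A bare-bones sketch of a bidirectional recurrent tagger for punctuation restoration, predicting a punctuation label after each word, is given below. It deliberately omits the attention mechanism, late fusion, and vocabulary reduction model described above, and all hyperparameters, labels, and class names are placeholders rather than the paper's configuration.

    import torch
    import torch.nn as nn

    # Labels predicted after each token: no punctuation, comma, period.
    LABELS = ["<none>", ",", "."]

    class PunctuationTagger(nn.Module):
        """Toy bidirectional LSTM tagger (no attention, no late fusion)."""

        def __init__(self, vocab_size: int, emb_dim: int = 64, hidden: int = 128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, len(LABELS))

        def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
            # token_ids: (batch, seq_len) -> logits: (batch, seq_len, n_labels)
            states, _ = self.rnn(self.embed(token_ids))
            return self.out(states)

    # Smoke test on a dummy batch of word indices.
    model = PunctuationTagger(vocab_size=1000)
    dummy = torch.randint(0, 1000, (2, 12))
    predictions = model(dummy).argmax(dim=-1)   # punctuation label per position
    print(predictions.shape)                    # torch.Size([2, 12])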

Automated Preamble Detection in Dictated Medical Reports
Wael Salloum | Greg Finley | Erik Edwards | Mark Miller | David Suendermann-Oeft
BioNLP 2017

Dictated medical reports very often feature a preamble containing meta-information about the report, such as patient and physician names, location and name of the clinic, date of procedure, and so on. In the medical transcription process, the preamble is usually omitted from the final report, as it contains information already available in the electronic medical record. We present a method which is able to automatically identify preambles in medical dictations. The method makes use of state-of-the-art NLP techniques, including word embeddings and Bi-LSTMs, and achieves preamble detection performance superior to that of humans.
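
Independently of the tagging model, the short sketch below shows one way to turn per-token preamble probabilities into a single cut point: choose the boundary that maximizes the likelihood of the preamble segment before it and the body segment after it. This decoding rule and the toy probabilities are assumptions for illustration, not the decision procedure from the paper.

    import math

    def preamble_boundary(p_preamble: list[float]) -> int:
        """Index at which the report body is assumed to start.

        p_preamble[i] is a model-supplied probability that token i belongs to
        the preamble; the cut maximizing the summed log-likelihood of the two
        segments is an illustrative decoder, not taken from the paper.
        """
        eps = 1e-9
        best_cut, best_score = 0, float("-inf")
        for cut in range(len(p_preamble) + 1):
            score = sum(math.log(p + eps) for p in p_preamble[:cut]) + \
                    sum(math.log(1.0 - p + eps) for p in p_preamble[cut:])
            if score > best_score:
                best_cut, best_score = cut, score
        return best_cut

    # Toy probabilities: the first four tokens look like preamble, the rest like body.
    probs = [0.95, 0.90, 0.85, 0.80, 0.20, 0.10, 0.05]
    print(preamble_boundary(probs))   # -> 4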

2016

LVCSR System on a Hybrid GPU-CPU Embedded Platform for Real-Time Dialog Applications
Alexei V. Ivanov | Patrick L. Lange | David Suendermann-Oeft
Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue

2015

Automated Speech Recognition Technology for Dialogue Interaction with Non-Native Interlocutors
Alexei V. Ivanov | Vikram Ramanarayanan | David Suendermann-Oeft | Melissa Lopez | Keelan Evanini | Jidong Tao
Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue

A distributed cloud-based dialog system for conversational application development
Vikram Ramanarayanan | David Suendermann-Oeft | Alexei V. Ivanov | Keelan Evanini
Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue

2012

One Year of Contender: What Have We Learned about Assessing and Tuning Industrial Spoken Dialog Systems?
David Suendermann | Roberto Pieraccini
NAACL-HLT Workshop on Future directions and needs in the Spoken Dialog Community: Tools and Data (SDCTD 2012)

2010

How to Drink from a Fire Hose: One Person Can Annoscribe One Million Utterances in One Month
David Suendermann | Jackson Liscombe | Roberto Pieraccini
Proceedings of the SIGDIAL 2010 Conference

2009

A Handsome Set of Metrics to Measure Utterance Classification Performance in Spoken Dialog Systems
David Suendermann | Jackson Liscombe | Krishna Dayanidhi | Roberto Pieraccini
Proceedings of the SIGDIAL 2009 Conference