Seppo Enarvi
2020
Generating Medical Reports from Patient-Doctor Conversations Using Sequence-to-Sequence Models
Seppo Enarvi, Marilisa Amoia, Miguel Del-Agua Teba, Brian Delaney, Frank Diehl, Stefan Hahn, Kristina Harris, Liam McGrath, Yue Pan, Joel Pinto, Luca Rubini, Miguel Ruiz, Gagandeep Singh, Fabian Stemmer, Weiyi Sun, Paul Vozila, Thomas Lin, Ranjani Ramamurthy
Proceedings of the First Workshop on Natural Language Processing for Medical Conversations
We discuss the automatic creation of medical reports from ASR-generated patient-doctor conversational transcripts using an end-to-end neural summarization approach. We explore both recurrent neural network (RNN) and Transformer-based sequence-to-sequence architectures for summarizing medical conversations. We have incorporated enhancements to these architectures, such as the pointer-generator network, which facilitates copying parts of the conversations into the reports, and a hierarchical RNN encoder that makes RNN training three times faster on long inputs. A comparison of the relative improvements from the different model architectures over an oracle extractive baseline is provided on a dataset of 800k orthopedic encounters. Consistent with observations in the literature for machine translation and related tasks, we find that the Transformer models outperform the RNN models in accuracy while taking less than half the time to train. Significant wins over a strong oracle baseline indicate that sequence-to-sequence modeling is a promising approach for the automatic generation of medical reports when data is available at scale.
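The copying mechanism mentioned in the abstract can be illustrated with a minimal sketch of the pointer-generator mixing step. This is not the authors' implementation; the function name and the toy numbers are illustrative. The idea is that the final output distribution blends the decoder's vocabulary distribution with attention-based copying from the source conversation, gated by a generation probability `p_gen`:

```python
def pointer_generator_mix(p_vocab, attention, source_ids, p_gen, vocab_size):
    """Blend generation and copy distributions into one output distribution.

    p_vocab    -- probability over the fixed vocabulary (list of floats)
    attention  -- attention weight per source token (list of floats)
    source_ids -- vocabulary id of each source token (what copying selects)
    p_gen      -- scalar gate in [0, 1]: 1.0 = pure generation, 0.0 = pure copy
    """
    # Scale the vocabulary distribution by the generation gate.
    final = [p_gen * p for p in p_vocab[:vocab_size]]
    # Attention mass on a source token adds copy probability to its word id.
    for weight, token_id in zip(attention, source_ids):
        final[token_id] += (1.0 - p_gen) * weight
    return final

# Toy example: a 4-word vocabulary and a 2-token source utterance.
dist = pointer_generator_mix(
    p_vocab=[0.4, 0.3, 0.2, 0.1],
    attention=[0.75, 0.25],
    source_ids=[2, 0],
    p_gen=0.5,
    vocab_size=4,
)
```

Because both input distributions sum to one and the gate is a convex weight, the blended `dist` is again a valid probability distribution; tokens that appear in the conversation (here ids 2 and 0) gain probability even if the generator itself ranks them low.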
2013
Studies on training text selection for conversational Finnish language modeling
Seppo Enarvi, Mikko Kurimo
Proceedings of the 10th International Workshop on Spoken Language Translation: Papers
Current ASR and MT systems do not operate on conversational Finnish, because training data for colloquial Finnish has not been available. Although speech recognition performance on literary Finnish is already quite good, those systems have very poor baseline performance on conversational speech. Text data for relevant vocabulary and language models can be collected from the Internet, but web data is very noisy and most of it is not helpful for learning good models. The Finnish language is highly agglutinative and written phonetically. Even phonetic reductions and sandhi are often written down in informal discussions. This increases the vocabulary size dramatically and causes word-based selection methods to fail. Our selection method explicitly optimizes the perplexity of a subword language model on the development data, and requires only a very limited amount of speech transcripts as development data. The language models have been evaluated for speech recognition using a new data set consisting of generic colloquial Finnish.
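The selection criterion described in the abstract can be sketched as a greedy filter: keep a web-text segment only if adding it to the training pool lowers the perplexity of a subword language model on the development data. The sketch below uses an add-one smoothed unigram model over pre-split subword tokens as a stand-in; the paper's actual model, subword segmentation, and data are not reproduced here, and the toy tokens are invented for illustration.

```python
import math
from collections import Counter

def perplexity(counts, total, vocab_size, dev_tokens):
    """Add-one smoothed unigram perplexity on the development tokens."""
    log_prob = 0.0
    for tok in dev_tokens:
        log_prob += math.log((counts[tok] + 1) / (total + vocab_size))
    return math.exp(-log_prob / len(dev_tokens))

def select_segments(segments, dev_tokens, vocab):
    """Greedy filtering: keep a segment iff it reduces dev perplexity."""
    counts, total = Counter(), 0
    best = perplexity(counts, total, len(vocab), dev_tokens)
    kept = []
    for seg in segments:
        trial = counts + Counter(seg)  # candidate pool with this segment added
        ppl = perplexity(trial, total + len(seg), len(vocab), dev_tokens)
        if ppl < best:
            counts, total, best = trial, total + len(seg), ppl
            kept.append(seg)
    return kept

# Toy run: the first segment matches the dev data, the second is noise.
dev = ["puhu", "taan", "puhu", "taan"]
kept = select_segments([["puhu", "taan"], ["xyz", "xyz"]],
                       dev, {"puhu", "taan", "xyz"})
```

In the toy run only the first segment survives, since the noise segment raises development perplexity. Operating on subword units rather than words is what keeps this workable for Finnish, where agglutination and phonetically written colloquial forms make word-level vocabularies explode.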