Brian Delaney


2020

Generating Medical Reports from Patient-Doctor Conversations Using Sequence-to-Sequence Models
Seppo Enarvi | Marilisa Amoia | Miguel Del-Agua Teba | Brian Delaney | Frank Diehl | Stefan Hahn | Kristina Harris | Liam McGrath | Yue Pan | Joel Pinto | Luca Rubini | Miguel Ruiz | Gagandeep Singh | Fabian Stemmer | Weiyi Sun | Paul Vozila | Thomas Lin | Ranjani Ramamurthy
Proceedings of the First Workshop on Natural Language Processing for Medical Conversations

We discuss the automatic creation of medical reports from ASR-generated patient-doctor conversational transcripts using an end-to-end neural summarization approach. We explore both recurrent neural network (RNN) and Transformer-based sequence-to-sequence architectures for summarizing medical conversations. We have incorporated enhancements to these architectures, such as the pointer-generator network, which facilitates copying parts of the conversations into the reports, and a hierarchical RNN encoder, which makes RNN training three times faster on long inputs. A comparison of the relative improvements from the different model architectures over an oracle extractive baseline is provided on a dataset of 800k orthopedic encounters. Consistent with observations in the literature for machine translation and related tasks, we find that the Transformer models outperform the RNN models in accuracy while taking less than half the time to train. Substantial gains over a strong oracle baseline indicate that sequence-to-sequence modeling is a promising approach for the automatic generation of medical reports when data is available at scale.
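For illustration only, and not from the paper: a minimal PyTorch sketch of the pointer-generator mixing step the abstract refers to, in which a learned gate interpolates between generating from the vocabulary and copying source tokens via the attention weights. The class name, projection layers, and tensor shapes are assumptions.

```python
import torch
import torch.nn as nn


class PointerGeneratorHead(nn.Module):
    """Illustrative sketch (not the authors' implementation): mixes a
    generation distribution over the vocabulary with a copy distribution
    over source positions."""

    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.vocab_proj = nn.Linear(hidden_size, vocab_size)
        self.p_gen_proj = nn.Linear(hidden_size, 1)

    def forward(self, decoder_state, attn_weights, src_token_ids):
        # decoder_state: (batch, hidden); attn_weights: (batch, src_len)
        # src_token_ids: (batch, src_len) vocabulary ids of the source tokens.
        vocab_dist = torch.softmax(self.vocab_proj(decoder_state), dim=-1)
        p_gen = torch.sigmoid(self.p_gen_proj(decoder_state))  # (batch, 1)

        # Scatter attention mass onto the vocabulary ids of the source tokens,
        # so words can be copied from the conversation into the report.
        copy_dist = torch.zeros_like(vocab_dist)
        copy_dist.scatter_add_(1, src_token_ids, attn_weights)

        return p_gen * vocab_dist + (1.0 - p_gen) * copy_dist
```

In a full model, a head like this would sit on top of the RNN or Transformer decoder state at each output step.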

2009

The MIT-LL/AFRL IWSLT-2009 MT system
Wade Shen | Brian Delaney | A. Ryan Aminzadeh | Tim Anderson | Ray Slyh
Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign

This paper describes the MIT-LL/AFRL statistical MT system and the improvements that were developed during the IWSLT 2009 evaluation campaign. As part of these efforts, we experimented with a number of extensions to the standard phrase-based model that improve performance on the Arabic-to-English and Turkish-to-English translation tasks. We discuss the architecture of the MIT-LL/AFRL MT system, improvements over our 2008 system, and experiments we ran during the IWSLT-2009 evaluation. Specifically, we focus on 1) cross-domain translation using MAP adaptation and unsupervised training, 2) Turkish morphological processing and translation, 3) improved Arabic morphology for MT preprocessing, and 4) system combination methods for machine translation.
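For illustration, a minimal sketch of count-based MAP interpolation for phrase translation probabilities, in the spirit of the cross-domain adaptation mentioned in item 1: in-domain counts are smoothed toward an out-of-domain model with a prior weight tau. This is an assumed, generic formulation, not the paper's exact one.

```python
from collections import defaultdict


def map_adapt_phrase_probs(in_domain_counts, out_domain_probs, tau=10.0):
    """Illustrative sketch: MAP-style smoothing of in-domain phrase
    translation counts toward an out-of-domain model; tau is the assumed
    strength of the out-of-domain prior."""
    src_totals = defaultdict(float)
    for (f, e), c in in_domain_counts.items():
        src_totals[f] += c

    adapted = {}
    for f, e in set(in_domain_counts) | set(out_domain_probs):
        c_in = in_domain_counts.get((f, e), 0.0)
        p_out = out_domain_probs.get((f, e), 0.0)
        adapted[(f, e)] = (c_in + tau * p_out) / (src_totals[f] + tau)
    return adapted


# Toy example: in-domain counts pull p(e|f) away from the out-of-domain prior.
counts = {("su", "water"): 8.0, ("su", "juice"): 2.0}
prior = {("su", "water"): 0.5, ("su", "his"): 0.5}
print(map_adapt_phrase_probs(counts, prior))
```

With a large tau the adapted table stays close to the out-of-domain prior; with tau near zero it follows the in-domain counts.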

2008

The MIT-LL/AFRL IWSLT-2008 MT system
Wade Shen | Brian Delaney | Tim Anderson | Ray Slyh
Proceedings of the 5th International Workshop on Spoken Language Translation: Evaluation Campaign

This paper describes the MIT-LL/AFRL statistical MT system and the improvements that were developed during the IWSLT 2008 evaluation campaign. As part of these efforts, we experimented with a number of extensions to the standard phrase-based model that improve performance for both text- and speech-based translation on the Chinese and Arabic translation tasks. We discuss the architecture of the MIT-LL/AFRL MT system, improvements over our 2007 system, and experiments we ran during the IWSLT-2008 evaluation. Specifically, we focus on 1) novel segmentation models for phrase-based MT, 2) improved lattice and confusion network decoding of speech input, 3) improved Arabic morphology for MT preprocessing, and 4) system combination methods for machine translation.
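As background for item 2, a minimal sketch of the confusion network input representation: the ASR lattice is collapsed into a sequence of word slots with posteriors, and the MT decoder explores alternative arcs in each slot rather than a single 1-best transcript. The greedy 1-best extraction below only illustrates the data structure; the slot contents and values are made up.

```python
# A confusion network collapses an ASR lattice into a sequence of slots;
# each slot holds alternative words (or an epsilon/skip arc) with posteriors.
# Illustrative sketch, not the system's actual decoder input code.
def best_path(confusion_network, eps="*EPS*"):
    """Greedy 1-best path: pick the highest-posterior arc in every slot."""
    words = []
    for slot in confusion_network:
        word, _ = max(slot, key=lambda arc: arc[1])
        if word != eps:
            words.append(word)
    return words


cn = [
    [("i", 0.9), ("*EPS*", 0.1)],
    [("need", 0.6), ("kneed", 0.4)],
    [("a", 0.6), ("*EPS*", 0.4)],
    [("doctor", 0.8), ("docked", 0.2)],
]
print(best_path(cn))  # -> ['i', 'need', 'a', 'doctor']
```

A confusion-network MT decoder scores the lower-posterior arcs jointly with the translation and language models instead of committing to this 1-best path.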

2007

The MIT-LL/AFRL IWSLT-2007 MT system
Wade Shen | Brian Delaney | Tim Anderson | Ray Slyh
Proceedings of the Fourth International Workshop on Spoken Language Translation

The MIT-LL/AFRL MT system implements a standard phrase-based, statistical translation model. It incorporates a number of extensions that improve performance for speech-based translation. During this evaluation, our efforts focused on the rapid porting of our SMT system to a new language (Arabic) and novel approaches to translation from speech input. This paper discusses the architecture of the MIT-LL/AFRL MT system, improvements over our 2006 system, and experiments we ran during the IWSLT-2007 evaluation. Specifically, we focus on 1) experiments comparing the performance of confusion network decoding and direct lattice decoding techniques for machine translation of speech, 2) the application of lightweight morphology for Arabic MT preprocessing, and 3) improved confusion network decoding.
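As a toy illustration of item 2, a sketch of lightweight proclitic segmentation over Buckwalter-transliterated Arabic, the general kind of preprocessing the abstract refers to; the prefix list and stem-length threshold are assumptions, not the rules used in the system.

```python
# Toy clitic splitter over Buckwalter-transliterated Arabic: peel off a few
# frequent proclitics (conjunction w+, prepositions b+/l+, article Al+) so
# that inflected surface forms share vocabulary with their stems.
# Illustrative sketch only; real rules need morphological context.
PROCLITICS = ["w", "f", "b", "l", "Al"]


def split_clitics(token, min_stem=3):
    pieces = []
    stripped = True
    while stripped:
        stripped = False
        for p in PROCLITICS:
            if token.startswith(p) and len(token) - len(p) >= min_stem:
                pieces.append(p + "+")
                token = token[len(p):]
                stripped = True
                break
    return pieces + [token]


print(split_clitics("wAlktAb"))  # -> ['w+', 'Al+', 'ktAb']
```

Splitting such prefixes shrinks the Arabic vocabulary seen by the phrase-based model and improves word alignment on small training sets.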

2006

Toward an Interagency Language Roundtable Based Assessment of Speech-to-Speech Translation Capabilities
Douglas Jones | Timothy Anderson | Sabine Atwell | Brian Delaney | James Dirgin | Michael Emonts | Neil Granoien | Martha Herzog | Timothy Hunter | Sargon Jabri | Wade Shen | Jurgen Sottung
Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers

We present observations from three exercises designed to map the effective listening and speaking skills of an operator of a speech-to-speech translation system (S2S) to the Interagency Language Roundtable (ILR) scale. Such a mapping is non-trivial, but will be useful for government and military decision makers in managing expectations of S2S technology. We observed domain-dependent S2S capabilities in the ILR range of Level 0+ to Level 1, and interactive text-based machine translation in the Level 3 range.

The MIT-LL/AFRL IWSLT-2006 MT system
Wade Shen | Brian Delaney | Tim Anderson
Proceedings of the Third International Workshop on Spoken Language Translation: Evaluation Campaign

An efficient graph search decoder for phrase-based statistical machine translation
Brian Delaney | Wade Shen | Timothy Anderson
Proceedings of the Third International Workshop on Spoken Language Translation: Papers

2005

The MIT-LL/AFRL MT System
Wade Shen | Brian Delaney | Tim Anderson
Proceedings of the Second International Workshop on Spoken Language Translation