David Sontag


2022

Large language models are few-shot clinical information extractors
Monica Agrawal | Stefan Hegselmann | Hunter Lang | Yoon Kim | David Sontag
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

A long-running goal of the clinical NLP community is the extraction of important variables trapped in clinical notes. However, roadblocks have included dataset shift from the general domain and a lack of public clinical corpora and annotations. In this work, we show that large language models, such as InstructGPT (Ouyang et al., 2022), perform well at zero- and few-shot information extraction from clinical text despite not being trained specifically for the clinical domain. Whereas text classification and generation performance have already been studied extensively in such models, here we additionally demonstrate how to leverage them to tackle a diverse set of NLP tasks which require more structured outputs, including span identification, token-level sequence classification, and relation extraction. Further, due to the dearth of available data to evaluate these systems, we introduce new datasets for benchmarking few-shot clinical information extraction based on a manual re-annotation of the CASI dataset (Moon et al., 2014) for new tasks. On the clinical extraction tasks we studied, the GPT-3 systems significantly outperform existing zero- and few-shot baselines.
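For illustration only, a minimal Python sketch of the zero-shot prompting pattern the abstract describes, here for listing medication mentions in a note. The prompt wording, the build_zero_shot_prompt helper, and the call_llm argument (any function that maps a prompt string to generated text from an InstructGPT-style model) are assumptions for this sketch, not the paper's actual prompts or output resolvers.

def build_zero_shot_prompt(note: str) -> str:
    # Ask the model to list every medication mentioned in a clinical note.
    return (
        "List every medication mentioned in the clinical note below, "
        "one per line. Write 'None' if there are no medications.\n\n"
        f"Note: {note}\n\nMedications:"
    )

def extract_medications(note: str, call_llm) -> list[str]:
    # call_llm: any function that sends a prompt to a large language model
    # and returns the generated completion as a string.
    completion = call_llm(build_zero_shot_prompt(note))
    answers = [line.strip("- ").strip() for line in completion.splitlines()]
    return [a for a in answers if a and a.lower() != "none"]

A few-shot variant would simply prepend a handful of annotated note/answer pairs to the same template before the target note.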

2021

CLIP: A Dataset for Extracting Action Items for Physicians from Hospital Discharge Notes
James Mullenbach | Yada Pruksachatkun | Sean Adler | Jennifer Seale | Jordan Swartz | Greg McKelvey | Hui Dai | Yi Yang | David Sontag
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Continuity of care is crucial to ensuring positive health outcomes for patients discharged from an inpatient hospital setting, and improved information sharing can help. To share information, caregivers write discharge notes containing action items to share with patients and their future caregivers, but these action items are easily lost due to the lengthiness of the documents. In this work, we describe our creation of a dataset of clinical action items annotated over MIMIC-III, the largest publicly available dataset of real clinical notes. This dataset, which we call CLIP, is annotated by physicians and covers 718 documents representing 100K sentences. We describe the task of extracting the action items from these documents as multi-aspect extractive summarization, with each aspect representing a type of action to be taken. We evaluate several machine learning models on this task, and show that the best models exploit in-domain language model pre-training on 59K unannotated documents, and incorporate context from neighboring sentences. We also propose an approach to pre-training data selection that allows us to explore the trade-off between size and domain-specificity of pre-training datasets for this task.
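As a rough illustration of this task framing (per-sentence, multi-label classification with context from neighboring sentences), here is a minimal Python baseline using scikit-learn. The with_context helper, the bag-of-words pipeline, and the label-matrix layout are assumptions for the sketch, not the pre-trained models the paper evaluates.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

def with_context(sentences, window=1):
    # Concatenate each sentence with its neighbors so the classifier
    # sees surrounding context, which the abstract notes is helpful.
    grouped = []
    for i in range(len(sentences)):
        lo, hi = max(0, i - window), min(len(sentences), i + window + 1)
        grouped.append(" ".join(sentences[lo:hi]))
    return grouped

# y_train is an (n_sentences, n_action_types) binary matrix with one column
# per action-item type; the exact label set here is hypothetical.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
# model.fit(with_context(train_sentences), y_train)
# flagged = model.predict(with_context(test_sentences))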

2010

On Dual Decomposition and Linear Programming Relaxations for Natural Language Processing
Alexander M. Rush | David Sontag | Michael Collins | Tommi Jaakkola
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

Dual Decomposition for Parsing with Non-Projective Head Automata
Terry Koo | Alexander M. Rush | Michael Collins | Tommi Jaakkola | David Sontag
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing