Pranav Rajpurkar
2020
Combining Automatic Labelers and Expert Annotations for Accurate Radiology Report Labeling Using BERT
Akshay Smit | Saahil Jain | Pranav Rajpurkar | Anuj Pareek | Andrew Ng | Matthew Lungren
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
The extraction of labels from radiology text reports enables large-scale training of medical imaging models. Existing approaches to report labeling typically rely either on sophisticated feature engineering based on medical domain knowledge or manual annotations by experts. In this work, we introduce a BERT-based approach to medical image report labeling that exploits both the scale of available rule-based systems and the quality of expert annotations. We demonstrate superior performance of a biomedically pretrained BERT model first trained on annotations of a rule-based labeler and then finetuned on a small set of expert annotations augmented with automated backtranslation. We find that our final model, CheXbert, is able to outperform the previous best rule-based labeler with statistical significance, setting a new SOTA for report labeling on one of the largest datasets of chest x-rays.
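The two-stage recipe described in the abstract (train a BERT-style encoder on a rule-based labeler's outputs, then fine-tune on backtranslation-augmented expert annotations) can be sketched roughly as follows. This is a minimal illustration, not the authors' released code: the encoder checkpoint name, the 14-condition / 4-class output layout, and the training loop are assumptions made for the sketch.

```python
# Hypothetical sketch of the two-stage training recipe: (1) train on labels
# produced by a rule-based labeler, (2) fine-tune on a small expert-annotated
# set expanded via backtranslation. Checkpoint and shapes are illustrative.
import torch
from torch import nn
from transformers import AutoModel

NUM_CONDITIONS = 14   # one output head per radiological observation (assumed)
NUM_CLASSES = 4       # e.g. blank / positive / negative / uncertain (assumed)

class ReportLabeler(nn.Module):
    """BERT encoder with one small classification head per condition."""

    def __init__(self, encoder_name="bert-base-uncased"):  # placeholder checkpoint
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, NUM_CLASSES) for _ in range(NUM_CONDITIONS)]
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] representation of the report
        return torch.stack([head(cls) for head in self.heads], dim=1)  # (B, 14, 4)

def train_epoch(model, loader, optimizer, device="cpu"):
    """One pass over (input_ids, attention_mask, labels) batches; labels: (B, 14)."""
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for input_ids, attention_mask, labels in loader:
        logits = model(input_ids.to(device), attention_mask.to(device))
        loss = loss_fn(logits.view(-1, NUM_CLASSES), labels.to(device).view(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Stage 1: run train_epoch over reports labeled automatically by a rule-based labeler.
# Stage 2: fine-tune the same model on the small expert-annotated set, with each
#          report additionally paraphrased via backtranslation to enlarge the data.
```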
2018
Know What You Don’t Know: Unanswerable Questions for SQuAD
Pranav Rajpurkar | Robin Jia | Percy Liang
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Extractive reading comprehension systems can often locate the correct answer to a question in a context document, but they also tend to make unreliable guesses on questions for which the correct answer is not stated in the context. Existing datasets either focus exclusively on answerable questions, or use automatically generated unanswerable questions that are easy to identify. To address these weaknesses, we present SQuADRUn, a new dataset that combines the existing Stanford Question Answering Dataset (SQuAD) with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuADRUn, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. SQuADRUn is a challenging natural language understanding task for existing models: a strong neural system that gets 86% F1 on SQuAD achieves only 66% F1 on SQuADRUn. We release SQuADRUn to the community as the successor to SQuAD.
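The answer-or-abstain behavior the dataset tests for is commonly implemented by comparing the best answer-span score against a "no answer" score at the [CLS] position. The sketch below illustrates that decision with an off-the-shelf extractive QA checkpoint; the model name, threshold, and naive span search are placeholders for illustration, not the system evaluated in the paper.

```python
# Minimal sketch of the answer-or-abstain decision required by SQuAD 2.0-style
# evaluation: abstain when the best span's score does not beat the "no answer"
# (CLS-position) score by a margin. Checkpoint and threshold are illustrative.
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

MODEL_NAME = "deepset/roberta-base-squad2"   # any QA model trained with a no-answer option
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForQuestionAnswering.from_pretrained(MODEL_NAME)

def answer_or_abstain(question, context, null_threshold=0.0):
    inputs = tokenizer(question, context, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    start, end = out.start_logits[0], out.end_logits[0]

    # Score of predicting "no answer": start and end both point at position 0.
    null_score = start[0] + end[0]

    # Best non-null span (naive search; real systems restrict the search to
    # context tokens and cap the span length).
    best_start = int(torch.argmax(start[1:])) + 1
    best_end = int(torch.argmax(end[best_start:])) + best_start
    span_score = start[best_start] + end[best_end]

    if span_score - null_score < null_threshold:
        return None  # abstain: no answer supported by the paragraph
    tokens = inputs["input_ids"][0][best_start : best_end + 1]
    return tokenizer.decode(tokens, skip_special_tokens=True)

# Example: answer_or_abstain("Who created SQuAD?", "SQuAD was created by Rajpurkar et al.")
```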
2016
SQuAD: 100,000+ Questions for Machine Comprehension of Text
Pranav Rajpurkar | Jian Zhang | Konstantin Lopyrev | Percy Liang
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing