Nikolas McNeal
2022
Thinking about GPT-3 In-Context Learning for Biomedical IE? Think Again
Bernal Jimenez Gutierrez | Nikolas McNeal | Clayton Washington | You Chen | Lang Li | Huan Sun | Yu Su
Findings of the Association for Computational Linguistics: EMNLP 2022
Large pre-trained language models (PLMs) such as GPT-3 have shown strong in-context learning capabilities, which are highly appealing for domains such as biomedicine that feature high and diverse demands of language technologies but also high data annotation costs. In this paper, we present the first systematic and comprehensive study to compare the few-shot performance of GPT-3 in-context learning with fine-tuning smaller (i.e., BERT-sized) PLMs on two representative biomedical information extraction (IE) tasks: named entity recognition and relation extraction. We follow the true few-shot setting to avoid overestimating models’ few-shot performance by model selection over a large validation set. We also optimize GPT-3’s performance with known techniques such as contextual calibration and dynamic in-context example retrieval. However, our results show that GPT-3 still significantly underperforms compared to simply fine-tuning a smaller PLM. In addition, GPT-3 in-context learning also yields smaller gains in accuracy when more training data becomes available. More in-depth analyses further reveal issues of in-context learning that may be detrimental to IE tasks in general. Given the high cost of experimenting with GPT-3, we hope our study provides helpful guidance for biomedical researchers and practitioners towards more practical solutions such as fine-tuning small PLMs before better in-context learning is available for biomedical IE.
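As a rough illustration of the dynamic in-context example retrieval mentioned in the abstract (not the authors' implementation), the sketch below selects, for each test sentence, the k most similar labeled examples to use as demonstrations in a GPT-3 prompt. TF-IDF similarity stands in for whatever retriever the paper actually used, and all function and variable names here are illustrative assumptions.

```python
# Minimal sketch of dynamic in-context example retrieval for few-shot NER prompts.
# Assumption: similarity-based selection of demonstrations; TF-IDF is a stand-in retriever.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_prompt(train_pool, test_sentence, k=3):
    """train_pool: list of (sentence, label_string) few-shot candidates."""
    texts = [s for s, _ in train_pool]
    vectorizer = TfidfVectorizer().fit(texts + [test_sentence])
    train_vecs = vectorizer.transform(texts)
    test_vec = vectorizer.transform([test_sentence])
    # Rank candidate demonstrations by similarity to the test input and keep the top-k.
    sims = cosine_similarity(test_vec, train_vecs)[0]
    top_idx = sims.argsort()[::-1][:k]
    demos = [
        f"Sentence: {train_pool[i][0]}\nEntities: {train_pool[i][1]}"
        for i in top_idx[::-1]  # place the most similar example closest to the query
    ]
    return "\n\n".join(demos + [f"Sentence: {test_sentence}\nEntities:"])

if __name__ == "__main__":
    pool = [
        ("Aspirin reduced fever in patients.", "Aspirin (Drug)"),
        ("The BRCA1 mutation raises cancer risk.", "BRCA1 (Gene)"),
    ]
    print(build_prompt(pool, "Ibuprofen relieved joint pain.", k=1))
```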