Jordan Sim Zheng Ting


2022

CXR Data Annotation and Classification with Pre-trained Language Models
Nina Zhou | Ai Ti Aw | Zhuo Han Liu | Cher Heng Tan | Yonghan Ting | Wen Xiang Chen | Jordan Sim Zheng Ting
Proceedings of the 29th International Conference on Computational Linguistics

Clinical data annotation has been one of the major obstacles to applying machine learning approaches in clinical NLP. Open-source tools such as NegBio and CheXpert are usually designed on data from specific institutions, which limits their application to other institutions due to differences in writing style, structure, language use, and label definitions. In this paper, we propose a new weak supervision annotation framework with two improvements over existing annotation frameworks: 1) we select representative samples for efficient manual annotation; 2) we auto-annotate the remaining samples; both steps leverage a self-trained sentence encoder. The framework also provides a function for identifying inconsistent annotations. The proposed weak supervision annotation framework is applicable to any given data annotation task, and it offers efficient sample selection and data auto-annotation with improved classification results in real applications.
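The abstract describes two encoder-driven steps: picking representative samples for manual annotation and propagating labels to the rest. Below is a minimal illustrative sketch of that general idea, not the authors' pipeline: a TF-IDF vectorizer stands in for the self-trained sentence encoder so the example runs without external models, and the clustering step, cluster count, and function names are assumptions made for illustration only.

```python
# Sketch: select representative report sentences for manual annotation,
# then auto-annotate the rest by nearest manually labelled representative.
# Assumptions: k-means over sentence embeddings, TF-IDF as a stand-in encoder.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def select_representatives(sentences, n_clusters=3):
    """Embed sentences, cluster them, and return (embeddings, indices of the
    sentence closest to each centroid), i.e. candidates for manual labelling."""
    embeddings = TfidfVectorizer().fit_transform(sentences).toarray()
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)
    reps = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
        reps.append(int(members[np.argmin(dists)]))
    return embeddings, reps

def propagate_labels(embeddings, rep_indices, rep_labels):
    """Auto-annotate every sample with the label of its most similar
    manually annotated representative (cosine similarity in embedding space)."""
    sims = cosine_similarity(embeddings, embeddings[rep_indices])
    return [rep_labels[j] for j in sims.argmax(axis=1)]

sentences = [
    "No acute cardiopulmonary abnormality.",
    "Heart size is normal; lungs are clear.",
    "Right lower lobe consolidation concerning for pneumonia.",
    "Patchy opacity in the right lung base, likely infection.",
    "Small left pleural effusion is present.",
    "Trace left-sided pleural fluid.",
]
emb, reps = select_representatives(sentences, n_clusters=3)
# In practice a human annotator labels only the representative sentences;
# placeholder labels are used here purely to show the propagation step.
rep_labels = ["label_A", "label_B", "label_C"]
print("representatives:", reps)
print("auto-annotations:", propagate_labels(emb, reps, rep_labels))
```

In the paper's setting the encoder is self-trained on in-domain CXR report text, which is what makes both the representative selection and the auto-annotation transferable across institutions; the sketch above only mirrors the overall select-then-propagate structure.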