Nina Zhou
2022
CXR Data Annotation and Classification with Pre-trained Language Models
Nina Zhou | Ai Ti Aw | Zhuo Han Liu | Cher Heng Tan | Yonghan Ting | Wen Xiang Chen | Jordan Sim Zheng Ting
Proceedings of the 29th International Conference on Computational Linguistics
Clinical data annotation has been one of the major obstacles to applying machine learning approaches in clinical NLP. Open-source tools such as NegBio and CheXpert are typically designed on data from specific institutions, which limits their application to other institutions due to differences in writing style, structure, language use, and label definition. In this paper, we propose a new weak supervision annotation framework with two improvements over existing annotation frameworks: 1) we select representative samples for efficient manual annotation; 2) we auto-annotate the remaining samples, both steps leveraging a self-trained sentence encoder. The framework also provides a function for identifying inconsistent annotation errors. Our proposed weak supervision annotation framework is applicable to any data annotation task, providing efficient sample selection and data auto-annotation with better classification results in real applications.
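As a rough illustration of the two steps described in the abstract, the sketch below picks cluster-central samples for manual annotation and then propagates those labels to the remaining data by embedding similarity. It is an assumed reading of the approach, not the authors' implementation: the function names are hypothetical, and the toy random embeddings stand in for the output of the self-trained sentence encoder.

```python
# Illustrative sketch only (not the paper's released code): representative-sample
# selection and auto-annotation on top of sentence embeddings.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def select_representative(embeddings: np.ndarray, n_samples: int) -> np.ndarray:
    """Return indices of the samples closest to the centroids of n_samples clusters."""
    km = KMeans(n_clusters=n_samples, n_init=10, random_state=0).fit(embeddings)
    sims = cosine_similarity(embeddings, km.cluster_centers_)
    return np.array([sims[:, c].argmax() for c in range(n_samples)])

def auto_annotate(embeddings: np.ndarray, labeled_idx: np.ndarray,
                  labels: np.ndarray) -> np.ndarray:
    """Give each sample the label of its most similar manually annotated sample."""
    sims = cosine_similarity(embeddings, embeddings[labeled_idx])
    return labels[sims.argmax(axis=1)]

# Toy usage: random vectors stand in for encoder embeddings of 200 report sentences.
emb = np.random.RandomState(0).randn(200, 64)
rep_idx = select_representative(emb, n_samples=10)        # send these for manual annotation
manual_labels = np.random.RandomState(1).randint(0, 3, 10)  # placeholder human labels
weak_labels = auto_annotate(emb, rep_idx, manual_labels)    # weak labels for the rest
```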
2016
A Word Labeling Approach to Thai Sentence Boundary Detection and POS Tagging
Nina Zhou
|
AiTi Aw
|
Nattadaporn Lertcheva
|
Xuancong Wang
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers
Previous studies on Thai Sentence Boundary Detection (SBD) mostly treated it as a space disambiguation problem, classifying each space either as an indicator of a Sentence Boundary (SB) or a non-Sentence Boundary (nSB). In this paper, we propose a word labeling approach which treats space as a normal word and detects SB between any two words. This removes the restriction that an SB can occur only at a space and makes our system more robust for modern Thai writing, where space is not consistently used to indicate SB. As syntactic information contributes to better SBD, we further propose a joint Part-Of-Speech (POS) tagging and SBD framework based on a Factorial Conditional Random Field (FCRF) model. We compare the performance of our proposed approach with reported methods on the ORCHID corpus, and we also perform experiments with the FCRF model on the TaLAPi corpus. The results show that the word labeling approach performs better than previous space-based classification approaches, and that the FCRF joint model outperforms the LCRF model in terms of SBD in all experiments.
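The word-labeling formulation can be pictured with a small example. The encoding below, where every token (including space) carries a boundary tag and is paired with a POS tag for the joint setup, is an assumed scheme for illustration only; the Thai tokens and POS values are placeholders rather than data from ORCHID or TaLAPi.

```python
# Minimal sketch of the word-labeling formulation (assumed encoding, not the
# paper's exact scheme): space is kept as an ordinary token, and each token is
# tagged with whether a sentence boundary (SB) follows it, so a boundary can be
# predicted with or without an accompanying space.
tokens   = ["วันนี้", "อากาศ", "ดี", " ", "ฉัน", "ไป", "ตลาด"]
sbd_tags = ["nSB",   "nSB",   "SB", "nSB", "nSB", "nSB", "SB"]   # boundary after "ดี" and "ตลาด"
pos_tags = ["NOUN",  "NOUN",  "ADJ", "SPACE", "PRON", "VERB", "NOUN"]  # placeholder POS values

# In the joint POS + SBD setup, the two label chains are modeled together by the
# FCRF; a simple stand-in here is to pair the two label sequences per token.
joint_labels = list(zip(tokens, pos_tags, sbd_tags))
print(joint_labels)
```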