Kun Liu
2021
Noisy-Labeled NER with Confidence Estimation
Kun Liu | Yao Fu | Chuanqi Tan | Mosha Chen | Ningyu Zhang | Songfang Huang | Sheng Gao
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Recent studies in deep learning have shown significant progress in named entity recognition (NER). However, most existing works assume clean data annotation, while real-world scenarios typically involve a large amount of noise from a variety of sources (e.g., pseudo, weak, or distant annotations). This work studies NER in a noisy-labeled setting with calibrated confidence estimation. Based on empirical observations of the different training dynamics of noisy and clean labels, we propose strategies for estimating confidence scores under local and global independence assumptions. We partially marginalize out labels of low confidence with a CRF model. We further propose a calibration method for confidence scores based on the structure of entity labels. We integrate our approach into a self-training framework to boost performance. Experiments in general noisy settings across four languages and in distantly labeled settings demonstrate the effectiveness of our method.
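The abstract's core idea, estimating a per-token confidence from training dynamics and then treating low-confidence labels as latent so a CRF can marginalize over them, can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the use of the mean per-epoch probability as the confidence score, and the fixed threshold are all assumptions made for illustration.

```python
import numpy as np

def estimate_confidence(epoch_probs):
    """Confidence per token, sketched here as the mean probability the model
    assigned to each token's observed label across training epochs.

    epoch_probs: array of shape (num_epochs, num_tokens)."""
    return np.mean(epoch_probs, axis=0)

def mask_low_confidence(labels, confidence, threshold=0.5):
    """Replace low-confidence labels with None, marking them as latent so a
    downstream CRF could marginalize over all label candidates at those
    positions (the marginalization itself is not shown here)."""
    return [lab if conf >= threshold else None
            for lab, conf in zip(labels, confidence)]

# Hypothetical per-epoch probabilities for three tokens over two epochs.
epoch_probs = np.array([[0.9, 0.2, 0.8],
                        [0.7, 0.3, 0.9]])
conf = estimate_confidence(epoch_probs)
masked = mask_low_confidence(["B-PER", "O", "I-PER"], conf, threshold=0.5)
# The middle token's label is kept latent; the others are trusted.
```

Tokens whose annotated label received consistently low probability during training are the ones most likely to be mislabeled, which is why their labels are the candidates for marginalization.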
2019
A Prism Module for Semantic Disentanglement in Name Entity Recognition
Kun Liu | Shen Li | Daqi Zheng | Zhengdong Lu | Sheng Gao | Si Li
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Natural language processing has long been hampered by the problem that multiple semantics are mixed inside a word, even with the help of context. To address this, we propose a prism module that disentangles the semantic aspects of words and reduces noise at the input layer of a model. In the prism module, some words are selectively replaced with task-related semantic aspects; these denoised word representations can then be fed into downstream tasks, making them easier to solve. We also introduce a structure for training this module jointly with the downstream model without additional data. The module can be easily integrated into a downstream model and significantly improves the performance of baselines on the named entity recognition (NER) task. An ablation analysis demonstrates the rationality of the method. As a side effect, the proposed method also provides a way to visualize the contribution of each word.
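The replacement idea described in the abstract, selectively swapping words for task-related semantic aspects before they reach the downstream model, can be sketched as follows. In the paper the selection is learned jointly with the downstream model; the dictionary lookup and hand-written gate below are purely hypothetical stand-ins used to make the data flow concrete.

```python
def prism_denoise(tokens, aspect_map, gate):
    """Return tokens with some words replaced by task-related aspect labels.

    aspect_map: hypothetical word -> aspect-label dictionary (in the paper
        this mapping is produced by a learned module, not a lookup table).
    gate: predicate deciding whether a token is replaced; stands in for the
        learned selection mechanism."""
    return [aspect_map.get(tok, tok) if gate(tok) else tok for tok in tokens]

# Illustrative aspect labels for an NER-style task.
aspect_map = {"Paris": "<LOCATION>", "Monday": "<DATE>"}
gate = lambda tok: tok in aspect_map  # replace only words we have aspects for

denoised = prism_denoise(["Alice", "visited", "Paris", "on", "Monday"],
                         aspect_map, gate)
```

The downstream tagger then sees the coarser, task-relevant symbols in place of the raw surface forms, which is the sense in which the module reduces input-layer noise.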
Co-authors
- Sheng Gao 2
- Yao Fu 1
- Chuanqi Tan 1
- Mosha Chen 1
- Ningyu Zhang 1