Abstract
We introduce a method for error detection in automatically annotated text, aimed at supporting the creation of high-quality language resources at affordable cost. Our method combines an unsupervised generative model with human supervision from active learning (AL). We test our approach on in-domain and out-of-domain data in two languages, in AL simulations and in a real-world setting. In all settings, the results show that our method detects annotation errors with high precision and high recall.
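To make the recipe in the abstract concrete, here is a minimal sketch of the general idea: score each automatic label with an unsupervised generative model, then spend a small human-annotation budget on the least probable instances first. This is not the authors' implementation; the toy corpus, the P(label | token) scorer, the smoothing constant, and the simulated oracle are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's implementation) of
# generative-model-based error detection with an active-learning budget.
from collections import Counter, defaultdict

# Toy automatically-labelled corpus: (token, predicted POS tag) pairs.
corpus = [("the", "DET"), ("dog", "NOUN"), ("runs", "VERB"),
          ("the", "DET"), ("dog", "VERB"),   # instance 4: injected error
          ("runs", "VERB"), ("dog", "NOUN"), ("the", "DET")]

# Unsupervised generative component: estimate P(label | token) from the
# noisy labels themselves, as a stand-in for the paper's generative model.
counts = defaultdict(Counter)
for tok, lab in corpus:
    counts[tok][lab] += 1

NUM_LABELS = 3  # DET, NOUN, VERB in this toy tag set

def label_prob(tok, lab):
    """Smoothed P(label | token); low values mark suspicious annotations."""
    c = counts[tok]
    return (c[lab] + 0.1) / (sum(c.values()) + 0.1 * NUM_LABELS)

# Score every instance: the lower the probability, the more suspicious.
scored = sorted((label_prob(t, l), i, t, l) for i, (t, l) in enumerate(corpus))

# Active-learning step: query a human (here a simulated oracle) on the
# most suspicious instances first, up to a fixed annotation budget.
oracle = {4: "NOUN"}  # the human knows instance 4 is really a NOUN
BUDGET = 2
for prob, i, tok, lab in scored[:BUDGET]:
    verdict = "annotation error" if oracle.get(i, lab) != lab else "ok"
    print(f"instance {i}: {tok}/{lab}  p={prob:.2f}  -> {verdict}")
```

Under these assumptions, the budget of two queries surfaces the injected error first, which mirrors the paper's motivation: human effort goes only where the generative model is least confident in the automatic annotation.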
- Anthology ID: P17-1107
- Volume: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month: July
- Year: 2017
- Address: Vancouver, Canada
- Editors: Regina Barzilay, Min-Yen Kan
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 1160–1170
- URL: https://aclanthology.org/P17-1107
- DOI: 10.18653/v1/P17-1107
- Cite (ACL): Ines Rehbein and Josef Ruppenhofer. 2017. Detecting annotation noise in automatically labelled data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1160–1170, Vancouver, Canada. Association for Computational Linguistics.
- Cite (Informal): Detecting annotation noise in automatically labelled data (Rehbein & Ruppenhofer, ACL 2017)
- PDF: https://aclanthology.org/P17-1107.pdf
- Data: English Web Treebank