Abstract
This article outlines a methodology that uses crowdsourcing to reduce the workload of experts for complex semantic tasks. We split turker-annotated datasets into a high-agreement block, which is not modified, and a low-agreement block, which is re-annotated by experts. The resulting annotations have higher observed agreement. We identify different biases in the annotation for both turkers and experts.
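As a rough illustration of the split described in the abstract, the following sketch routes crowdsourced items by per-item observed agreement. The label set, threshold, and data layout are placeholders for illustration only and are not taken from the paper.

```python
from collections import Counter

def observed_agreement(labels):
    """Fraction of annotator pairs that assign the same label to one item."""
    n = len(labels)
    if n < 2:
        return 1.0
    pairs = n * (n - 1) / 2
    agreeing = sum(c * (c - 1) / 2 for c in Counter(labels).values())
    return agreeing / pairs

def split_by_agreement(items, threshold=0.7):
    """Split items into a high-agreement block (kept as-is) and a
    low-agreement block (to be re-annotated by experts)."""
    high, low = [], []
    for item in items:
        block = high if observed_agreement(item["labels"]) >= threshold else low
        block.append(item)
    return high, low

# Each item holds the labels assigned by five turkers (illustrative values).
items = [
    {"id": 1, "labels": ["literal", "literal", "literal", "literal", "literal"]},
    {"id": 2, "labels": ["literal", "metonymic", "underspecified", "metonymic", "literal"]},
]
high_block, low_block = split_by_agreement(items)
# high_block keeps the turker annotations; low_block is sent to experts.
```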
- Anthology ID: L14-1399
- Volume: Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
- Month: May
- Year: 2014
- Address: Reykjavik, Iceland
- Editors: Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk, Stelios Piperidis
- Venue: LREC
- Publisher: European Language Resources Association (ELRA)
- Pages: 229–234
- URL: http://www.lrec-conf.org/proceedings/lrec2014/pdf/471_Paper.pdf
- Cite (ACL): Héctor Martínez Alonso and Lauren Romeo. 2014. Crowdsourcing as a preprocessing for complex semantic annotation tasks. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 229–234, Reykjavik, Iceland. European Language Resources Association (ELRA).
- Cite (Informal): Crowdsourcing as a preprocessing for complex semantic annotation tasks (Alonso & Romeo, LREC 2014)