Getting Reliable Annotations for Sarcasm in Online Dialogues
Reid Swanson, Stephanie Lukin, Luke Eisenberg, Thomas Corcoran, Marilyn Walker
Abstract
The language used in online forums differs in many ways from that of traditional language resources such as news. One difference is the use and frequency of nonliteral, subjective dialogue acts such as sarcasm. Whether the aim is to develop a theory of sarcasm in dialogue, or to engineer automatic methods for reliably detecting sarcasm, a major challenge is simply the difficulty of getting enough reliably labelled examples. In this paper we describe our work on methods for achieving highly reliable sarcasm annotations from untrained annotators on Mechanical Turk. We explore the use of a number of common statistical reliability measures, such as Kappa, Karger's, Majority Class, and EM. We show that more sophisticated measures do not appear to yield better results for our data than simple measures such as assuming that the correct label is the one that a majority of Turkers apply.
- Anthology ID:
- L14-1046
- Volume:
- Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
- Month:
- May
- Year:
- 2014
- Address:
- Reykjavik, Iceland
- Venue:
- LREC
- Publisher:
- European Language Resources Association (ELRA)
- Pages:
- 4250–4257
- URL:
- http://www.lrec-conf.org/proceedings/lrec2014/pdf/1063_Paper.pdf
- Cite (ACL):
- Reid Swanson, Stephanie Lukin, Luke Eisenberg, Thomas Corcoran, and Marilyn Walker. 2014. Getting Reliable Annotations for Sarcasm in Online Dialogues. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 4250–4257, Reykjavik, Iceland. European Language Resources Association (ELRA).
- Cite (Informal):
- Getting Reliable Annotations for Sarcasm in Online Dialogues (Swanson et al., LREC 2014)
- PDF:
- http://www.lrec-conf.org/proceedings/lrec2014/pdf/1063_Paper.pdf
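The simplest aggregation strategy the abstract mentions, Majority Class, takes the label most Turkers applied as the correct one. A minimal sketch of that idea in Python; the example data and the function name `majority_label` are illustrative assumptions, not drawn from the paper:

```python
from collections import Counter

def majority_label(labels):
    """Return the label applied by the most annotators.

    Ties are broken by returning None so the item can be
    flagged for further adjudication.
    """
    counts = Counter(labels).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # tie: no majority label
    return counts[0][0]

# Hypothetical Mechanical Turk judgments for three dialogue turns:
# each inner list holds the sarcasm labels from individual Turkers.
annotations = [
    ["sarcastic", "sarcastic", "not", "sarcastic", "not"],
    ["not", "not", "not", "sarcastic", "not"],
    ["sarcastic", "not"],
]

gold = [majority_label(a) for a in annotations]
print(gold)  # ['sarcastic', 'not', None]
```

Measures such as EM-based aggregation (e.g. Dawid–Skene) instead weight each annotator by an estimated reliability; the paper's finding is that, for this data, such weighting did not beat the simple majority rule above.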