@inproceedings{sato-miyazawa-2020-quality,
    title = "Quality Estimation for Partially Subjective Classification Tasks via Crowdsourcing",
    author = "Sato, Yoshinao  and
      Miyazawa, Kouki",
    editor = "Calzolari, Nicoletta  and
      B{\'e}chet, Fr{\'e}d{\'e}ric  and
      Blache, Philippe  and
      Choukri, Khalid  and
      Cieri, Christopher  and
      Declerck, Thierry  and
      Goggi, Sara  and
      Isahara, Hitoshi  and
      Maegaard, Bente  and
      Mariani, Joseph  and
      Mazo, H{\'e}l{\`e}ne  and
      Moreno, Asuncion  and
      Odijk, Jan  and
      Piperidis, Stelios",
    booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference",
    month = may,
    year = "2020",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://preview.aclanthology.org/ingest-emnlp/2020.lrec-1.29/",
    pages = "229--235",
    language = "eng",
    ISBN = "979-10-95546-34-4",
    abstract = "The quality estimation of artifacts generated by creators via crowdsourcing has great significance for the construction of a large-scale data resource. A common approach to this problem is to ask multiple reviewers to evaluate the same artifacts. However, the commonly used majority voting method to aggregate reviewers' evaluations does not work effectively for partially subjective or purely subjective tasks because reviewers' sensitivity and bias of evaluation tend to have a wide variety. To overcome this difficulty, we propose a probabilistic model for subjective classification tasks that incorporates the qualities of artifacts as well as the abilities and biases of creators and reviewers as latent variables to be jointly inferred. We applied this method to the partially subjective task of speech classification into the following four attitudes: agreement, disagreement, stalling, and question. The result shows that the proposed method estimates the quality of speech more effectively than a vote aggregation, measured by correlation with a fine-grained classification by experts."
}