DialCrowd 2.0: A Quality-Focused Dialog System Crowdsourcing Toolkit

Jessica Huynh, Ting-Rui Chiang, Jeffrey Bigham, Maxine Eskenazi


Abstract
Dialog system developers need high-quality data to train, fine-tune, and assess their systems. They often use crowdsourcing for this since it provides large quantities of data from many workers. However, the resulting data may not be of sufficiently good quality; quality issues often stem from how the requester presents a task and how they interact with the workers. This paper introduces DialCrowd 2.0, which helps requesters obtain higher-quality data by, for example, presenting tasks more clearly and facilitating effective communication with workers. DialCrowd 2.0 guides developers in creating improved Human Intelligence Tasks (HITs) and is directly applicable to the workflows currently used by developers and researchers.
Anthology ID: 2022.lrec-1.134
Volume: Proceedings of the Thirteenth Language Resources and Evaluation Conference
Month: June
Year: 2022
Address: Marseille, France
Venue: LREC
Publisher: European Language Resources Association
Pages: 1256–1263
URL: https://aclanthology.org/2022.lrec-1.134
Cite (ACL): Jessica Huynh, Ting-Rui Chiang, Jeffrey Bigham, and Maxine Eskenazi. 2022. DialCrowd 2.0: A Quality-Focused Dialog System Crowdsourcing Toolkit. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 1256–1263, Marseille, France. European Language Resources Association.
Cite (Informal): DialCrowd 2.0: A Quality-Focused Dialog System Crowdsourcing Toolkit (Huynh et al., LREC 2022)
PDF: https://preview.aclanthology.org/ingestion-script-update/2022.lrec-1.134.pdf