Abstract
How should conversational agents respond to verbal abuse from the user? To answer this question, we conduct a large-scale crowd-sourced evaluation of abuse response strategies employed by current state-of-the-art systems. Our results show that some strategies, such as “polite refusal”, score highly across the board, while for other strategies demographic factors, such as age, as well as the severity of the preceding abuse, influence the user’s perception of which response is appropriate. In addition, we find that most data-driven models lag behind rule-based or commercial systems in terms of their perceived appropriateness.

- Anthology ID: W19-5942
- Volume: Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue
- Month: September
- Year: 2019
- Address: Stockholm, Sweden
- Editors: Satoshi Nakamura, Milica Gasic, Ingrid Zukerman, Gabriel Skantze, Mikio Nakano, Alexandros Papangelis, Stefan Ultes, Koichiro Yoshino
- Venue: SIGDIAL
- SIG: SIGDIAL
- Publisher: Association for Computational Linguistics
- Pages: 361–366
- URL: https://preview.aclanthology.org/icon-24-ingestion/W19-5942/
- DOI: 10.18653/v1/W19-5942
- Cite (ACL): Amanda Cercas Curry and Verena Rieser. 2019. A Crowd-based Evaluation of Abuse Response Strategies in Conversational Agents. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 361–366, Stockholm, Sweden. Association for Computational Linguistics.
- Cite (Informal): A Crowd-based Evaluation of Abuse Response Strategies in Conversational Agents (Cercas Curry & Rieser, SIGDIAL 2019)
- PDF: https://preview.aclanthology.org/icon-24-ingestion/W19-5942.pdf
- Code: amandacurry/metoo_corpus