Abstract
Conversational AI systems, such as Amazon's Alexa, are rapidly evolving from purely transactional systems into social chatbots that can respond to a wide variety of user requests. In this article, we investigate how current state-of-the-art conversational systems react to inappropriate requests, such as bullying and sexual harassment on the part of the user, by collecting and analysing the novel #MeTooAlexa corpus. Our results show that commercial systems mainly avoid answering, while rule-based chatbots show a variety of behaviours and often deflect. Data-driven systems, on the other hand, are often incoherent, but also run the risk of being interpreted as flirtatious and sometimes react with counter-aggression. This includes our own system, trained on "clean" data, which suggests that inappropriate system behaviour is not caused by data bias.

- Anthology ID: W18-0802
- Original: W18-0802v1
- Version 2: W18-0802v2
- Volume: Proceedings of the Second ACL Workshop on Ethics in Natural Language Processing
- Month: June
- Year: 2018
- Address: New Orleans, Louisiana, USA
- Venue: EthNLP
- Publisher: Association for Computational Linguistics
- Pages: 7–14
- URL: https://aclanthology.org/W18-0802
- DOI: 10.18653/v1/W18-0802
- Cite (ACL): Amanda Cercas Curry and Verena Rieser. 2018. #MeToo Alexa: How Conversational Systems Respond to Sexual Harassment. In Proceedings of the Second ACL Workshop on Ethics in Natural Language Processing, pages 7–14, New Orleans, Louisiana, USA. Association for Computational Linguistics.
- Cite (Informal): #MeToo Alexa: How Conversational Systems Respond to Sexual Harassment (Cercas Curry & Rieser, EthNLP 2018)
- PDF: https://preview.aclanthology.org/auto-file-uploads/W18-0802.pdf