Evaluating and Enhancing the Robustness of Dialogue Systems: A Case Study on a Negotiation Agent

Minhao Cheng, Wei Wei, Cho-Jui Hsieh


Abstract
Recent research has demonstrated that goal-oriented dialogue agents trained on large datasets can achieve striking performance when interacting with human users. In real-world applications, however, it is important to ensure that the agent performs smoothly when interacting not only with regular users but also with malicious ones who attack the system through interactions in order to achieve goals to their own advantage. In this paper, we develop algorithms to evaluate the robustness of a dialogue agent through carefully designed attacks using adversarial agents. These attacks are performed in both black-box and white-box settings. Furthermore, we demonstrate that adversarial training using our attacks can significantly improve the robustness of a goal-oriented dialogue system. In a case study on the negotiation agent developed by Lewis et al. (2017), our attacks reduce the trained RL-based agent's average reward advantage over the attacker from 2.68 to -5.76 on a scale from -10 to 10 with randomized goals. Moreover, we show that with adversarial training, we are able to improve the robustness of negotiation agents by 1.5 points on average against all our attacks.
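To make the adversarial-training idea from the abstract concrete, the following is a minimal, self-contained sketch of training a negotiation agent against an adversarial opponent. All classes, the toy "policy", and the update rule are illustrative assumptions for exposition only; they are not the models, attacks, or algorithm used in the paper or in Lewis et al. (2017).

```python
import random

# Hedged sketch: a toy negotiation game where an agent and a black-box
# adversarial opponent each claim a share of a fixed total value, and the
# agent's "policy" (a single aggressiveness parameter) is adapted based on
# its reward advantage. This is a stand-in, not the paper's actual method.

class ToyNegotiator:
    """Stand-in for a trained dialogue agent; proposes a share of the value."""
    def __init__(self, aggressiveness=0.5):
        self.aggressiveness = aggressiveness

    def propose(self, total_value=10):
        # Claim a noisy share of the total value.
        share = random.gauss(self.aggressiveness * total_value, 1.0)
        return min(total_value, max(0.0, share))


class AdversarialNegotiator(ToyNegotiator):
    """Black-box attacker: greedily claims most of the value."""
    def __init__(self):
        super().__init__(aggressiveness=0.9)


def play_episode(agent, attacker, total_value=10):
    """One simplified negotiation; returns the agent's advantage over the attacker."""
    agent_share = agent.propose(total_value)
    attacker_share = attacker.propose(total_value)
    # Rescale so the two claims split the total value (crude stand-in for agreement).
    scale = total_value / max(agent_share + attacker_share, 1e-6)
    return agent_share * scale - attacker_share * scale  # roughly on a -10..10 scale


def adversarial_training(agent, attacker, episodes=1000, lr=0.01):
    """Toy adversarial training loop: adapt the agent while it plays the attacker."""
    for _ in range(episodes):
        advantage = play_episode(agent, attacker)
        # Toy update: become more aggressive when losing, back off when winning easily.
        if advantage < 0:
            agent.aggressiveness = min(1.0, agent.aggressiveness + lr)
        elif advantage > 5:
            agent.aggressiveness = max(0.0, agent.aggressiveness - lr)
    return agent


if __name__ == "__main__":
    random.seed(0)
    agent, attacker = ToyNegotiator(), AdversarialNegotiator()
    before = sum(play_episode(agent, attacker) for _ in range(200)) / 200
    agent = adversarial_training(agent, attacker)
    after = sum(play_episode(agent, attacker) for _ in range(200)) / 200
    print(f"average advantage before: {before:.2f}, after: {after:.2f}")
```

The sketch only illustrates the overall loop (attack episodes followed by policy updates on the defender); the paper's agents are neural dialogue models and its attacks operate on the dialogue interaction itself, in both black-box and white-box settings.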
Anthology ID:
N19-1336
Volume:
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Editors:
Jill Burstein, Christy Doran, Thamar Solorio
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
3325–3335
URL:
https://aclanthology.org/N19-1336
DOI:
10.18653/v1/N19-1336
Cite (ACL):
Minhao Cheng, Wei Wei, and Cho-Jui Hsieh. 2019. Evaluating and Enhancing the Robustness of Dialogue Systems: A Case Study on a Negotiation Agent. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3325–3335, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
Evaluating and Enhancing the Robustness of Dialogue Systems: A Case Study on a Negotiation Agent (Cheng et al., NAACL 2019)
PDF:
https://preview.aclanthology.org/naacl24-info/N19-1336.pdf