Learning from Explanations and Demonstrations: A Pilot Study
Silvia Tulli | Sebastian Wallkötter | Ana Paiva | Francisco S. Melo | Mohamed Chetouani
2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence, 2020
AI has become prominent in a growing number of systems, and, as a direct consequence, the demand for explainability in such systems has grown as well. To build explainable systems, a large portion of existing research uses various kinds of natural language technologies, e.g., text-to-speech mechanisms or string visualizations. Here, we provide an overview of the challenges associated with natural language explanations by reviewing existing literature. Additionally, we discuss the relationship between explainability and knowledge transfer in reinforcement learning. We argue that explainability methods, in particular methods that model the recipient of an explanation, might help increase sample efficiency. To this end, we present a computational approach to optimize the learner’s performance using explanations of another agent and discuss our results in light of effective natural language explanations for humans.
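To make the idea of advice-driven sample efficiency concrete, the following is a minimal sketch, not the paper's actual implementation: a tabular Q-learner on a toy corridor task that, with some probability, follows action advice from a more competent "teacher" agent instead of its own epsilon-greedy choice. The teacher's advice stands in for the explanations/demonstrations mentioned above; the environment, function names, and parameters are all illustrative assumptions.

```python
import random

N_STATES = 10          # corridor cells; the goal is the right-most cell
ACTIONS = [-1, +1]     # move left / move right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1


def teacher_advice(state):
    """Hypothetical teacher: always recommends moving toward the goal."""
    return +1


def run_episode(q_table, advice_prob):
    """One Q-learning episode; with probability `advice_prob` the learner
    follows the teacher's advice instead of its own epsilon-greedy choice."""
    state, steps = 0, 0
    while state < N_STATES - 1 and steps < 100:
        if random.random() < advice_prob:
            action = teacher_advice(state)            # exploit the "explanation"
        elif random.random() < EPSILON:
            action = random.choice(ACTIONS)           # explore on its own
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else -0.01
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += ALPHA * (
            reward + GAMMA * best_next - q_table[(state, action)]
        )
        state, steps = next_state, steps + 1
    return steps


q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
for episode in range(200):
    # Advice is phased out so the learner eventually relies on its own policy.
    steps_taken = run_episode(q, advice_prob=max(0.0, 0.5 - episode * 0.01))
    if episode % 50 == 0:
        print(f"episode {episode}: reached goal in {steps_taken} steps")
```

Under these assumptions, episodes guided by teacher advice reach the goal sooner, so fewer environment interactions are needed before the learner's own Q-values become useful, which is the sample-efficiency effect the abstract alludes to.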