Mor Vered


2025

In this paper, we generate two types of explanations, Impact and Critique, of predictions made by a Goal Recognizer (GR), a system that infers agents' goals from observations. Impact explanations describe the top-predicted goal(s) and the main observations that led to these predictions. Critique explanations augment Impact explanations with evidence that challenges the GR's predictions when warranted. Our user study compares users' goal-recognition accuracy for Impact and Critique explanations, and users' views about these explanations, under three prediction-correctness conditions: correct, partially correct, and incorrect. Our results show that (1) users stick with the GR's predictions even when a Critique explanation highlights flaws in these predictions; yet (2) Critique explanations are deemed better than Impact explanations in most respects.