Matthias Kraus


2026

While proactivity, i.e., the ability to take the initiative and anticipate requests in order to improve the effectiveness of a conversation, has traditionally been investigated in task-oriented dialogues (e.g., booking a restaurant), less work addresses proactive behaviours in task-guidance dialogues (e.g., guiding the execution of recipes), where an expert instructor is supposed to interact with and supervise a user in a real-world setting. We analyse a corpus of video-recorded task-guidance dialogues and explore two key features of proactivity in this context: (i) the impact of multimodal features, compared with chat-based dialogues; (ii) the impact of instructions and actions grounded in a real situation. Through a comparison between annotated task-oriented and task-guidance dialogues, we find that task-guidance dialogues are highly collaborative interactions, where preventing mistakes and maintaining the correct process order is essential for achieving the dialogue goal. In addition, the video information available in the task-guidance setting can correct false-positive proactive behaviours, although without introducing substantial differences. To support our analysis and to foster further research, we provide a corpus of multimodal task-guidance dialogues annotated for proactivity.
The increasing application of Large Language Models (LLMs) in everyday tasks and at work highlights the crucial importance of trust in human-AI collaboration, particularly when an AI system fails. This paper investigates the effectiveness of failure communication strategies for trust repair in collaborative physical tasks involving a chat-based AI assistant. In a controlled experiment, participants built LEGO cars guided by an LLM-based AI assistant, allowing us to evaluate whether findings on trust repair from virtual environments, such as chatbots, translate to an environment comprising tangible tasks, and whether the timing of trust repair influences the outcome. Results indicate that actively communicating mistakes significantly improves trust compared to a no-repair strategy, and that early repair tends to be more effective. This suggests that failure communication, independent of its timing, is important for an appropriate calibration of trust.

2022

Robots will eventually enter our daily lives and assist with a variety of tasks. Especially in the household domain, robots may become indispensable helpers by taking over tedious tasks, e.g., keeping the place tidy. Their effectiveness and efficiency, however, depend on their ability to adapt to our needs, routines, and personal characteristics. Otherwise, they may not be accepted and trusted in our private domain. To enable adaptation, the interaction between a human and a robot needs to be personalized. Therefore, the robot needs to collect personal information from the user. However, it is unclear how such sensitive data can be collected in an understandable way without losing a user’s trust in the system. In this paper, we present a conversational approach for explicitly collecting personal user information through natural dialogue. To achieve sound interactive personalization, we have developed an empathy-augmented dialogue strategy. In an online study, the empathy-augmented strategy was compared to a baseline dialogue strategy for interactive personalization. We found the empathy-augmented strategy to be perceived as notably friendlier. Overall, using dialogue for interactive personalization was generally received positively by users.

2020

Recommendation systems aim to facilitate information retrieval for users by taking their preferences into account. Based on previous user behaviour, such a system suggests items or provides information that a user might like or find useful. Nonetheless, how to provide suggestions is still an open question. The way a recommendation is communicated influences the user’s perception of the system. This paper presents an empirical study on the effects of proactive dialogue strategies on user acceptance. To this end, an explicit strategy based on user preferences provided directly by the user and an implicit proactive strategy using autonomously gathered information are compared. The results show that proactive dialogue systems significantly affect the perception of human-computer interaction. Although no significant differences are found between the implicit and explicit strategies, proactivity significantly influences the user experience compared to reactive system behaviour. The study contributes new insights to human-agent interaction and voice user interface design. Furthermore, we identify interesting tendencies that motivate future work.

2018

2015