A model to generate adaptive multimodal job interviews with a virtual recruiter
Zoraida Callejas | Brian Ravenet | Magalie Ochs | Catherine Pelachaud
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
This paper presents an adaptive model of multimodal social behavior for embodied conversational agents. The context of this research is the training of youngsters for job interviews in a serious game in which the agent plays the role of a virtual recruiter. With the proposed model, the agent is able to adapt its social behavior according to the anxiety level of the trainee and a predefined difficulty level of the game. This information is used to select the objective of the system (to challenge or comfort the user), which is achieved by choosing the complexity of the next question posed and the agent's verbal and non-verbal behavior. We have carried out a perception study showing that the multimodal behavior of an agent implementing our model successfully conveys the intended social attitudes.
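The adaptation described above (trainee anxiety and game difficulty determine whether the recruiter challenges or comforts, which in turn drives question complexity) can be sketched as a small decision rule. This is only an illustrative sketch: the function names, the anxiety threshold, and the difficulty scale are assumptions for exposition, not the paper's actual model.

```python
# Illustrative sketch of the selection logic the abstract describes.
# All names, thresholds, and scales here are hypothetical assumptions.

def select_objective(anxiety: float, difficulty: int) -> str:
    """Choose the recruiter's objective from the trainee's anxiety
    (assumed normalized to 0..1) and the game's difficulty (assumed 1..3)."""
    # Assumption: comfort an anxious trainee unless difficulty is maximal.
    if anxiety > 0.6 and difficulty < 3:
        return "comfort"
    return "challenge"

def select_question_complexity(objective: str, difficulty: int) -> int:
    """Pick the next question's complexity level (assumed 1..3)."""
    if objective == "challenge":
        # Challenging: ask a question at the configured difficulty.
        return difficulty
    # Comforting: step down one level, never below the easiest.
    return max(1, difficulty - 1)

# Example: an anxious trainee in an easy game is comforted with
# an easier question; a calm trainee is challenged.
obj = select_objective(anxiety=0.8, difficulty=2)
complexity = select_question_complexity(obj, difficulty=2)
```

The objective would then also condition the agent's verbal and non-verbal behavior (e.g. warmer versus more dominant expressions), which the sketch leaves out.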