Antonia Schmidt


2025

Using Game Play to Investigate Multimodal and Conversational Grounding in Large Multimodal Models
Sherzod Hakimov | Yerkezhan Abdullayeva | Kushal Koshti | Antonia Schmidt | Yan Weiser | Anne Beyer | David Schlangen
Proceedings of the 31st International Conference on Computational Linguistics

While the situation has improved for text-only models, multimodal (text and image) models currently again seem to develop faster than the means to evaluate them. In this paper, we bring a recently developed evaluation paradigm from text models to multimodal models, namely evaluation through goal-oriented game (self-)play, complementing reference-based and preference-based evaluation. Specifically, we define games that challenge a model’s capability to represent a situation from visual information and to align such representations through dialogue. We find that the largest closed models perform rather well on the games that we define, while even the best open-weight models struggle with them. On further analysis, we find that the exceptional deep captioning capabilities of the largest models drive some of the performance. There is still room to grow for both kinds of models, ensuring the continued relevance of the benchmark.
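
To make the evaluation paradigm concrete, here is a minimal, self-contained sketch of game-play-based evaluation: two stand-in agents play a goal-oriented reference game, and the benchmark score is simply the episode success rate. All names here (play_reference_game, describer, guesser) are hypothetical illustrations, not the paper’s actual framework or API.

```python
# Minimal sketch of evaluation through goal-oriented game play (hypothetical
# names throughout): one agent describes a target image, a second agent must
# pick it out of a set of candidates, and the score is the success rate.
import random

def play_reference_game(describer, guesser, images, rng):
    """One episode: the describer sees the target, the guesser picks from candidates."""
    target = rng.choice(images)
    description = describer(target)        # e.g. a generated caption
    guess = guesser(description, images)   # pick the image matching the text
    return guess == target                 # goal-oriented success criterion

def evaluate(describer, guesser, images, episodes=100, seed=0):
    rng = random.Random(seed)
    wins = sum(play_reference_game(describer, guesser, images, rng)
               for _ in range(episodes))
    return wins / episodes

if __name__ == "__main__":
    # Toy stand-ins: a pair that communicates perfectly vs. a blind guesser.
    images = [f"img_{i}" for i in range(4)]
    perfect = evaluate(lambda img: img, lambda d, imgs: d, images)
    baseline = evaluate(lambda img: img,
                        lambda d, imgs: random.choice(imgs), images)
    print(f"perfect pair:    {perfect:.2f}")   # 1.00
    print(f"random guesser:  {baseline:.2f}")  # about 0.25
```

The point of this design is that success is checked against the game’s own goal, rather than against a reference answer or a preference judge.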

Playpen: An Environment for Exploring Learning From Dialogue Game Feedback
Nicola Horst | Davide Mazzaccara | Antonia Schmidt | Michael Sullivan | Filippo Momentè | Luca Franceschetti | Philipp Sadler | Sherzod Hakimov | Alberto Testoni | Raffaella Bernardi | Raquel Fernández | Alexander Koller | Oliver Lemon | David Schlangen | Mario Giulianelli | Alessandro Suglia
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Interaction between learner and feedback-giver has recently come into focus for the post-training of Large Language Models (LLMs), through the use of reward models that judge the appropriateness of a model’s response. In this paper, we investigate whether Dialogue Games (goal-directed and rule-governed activities driven predominantly by verbal actions) can also serve as a source of feedback signals for learning. We introduce Playpen, an environment for offline and online learning through Dialogue Game self-play, and investigate a representative set of post-training methods: supervised fine-tuning (SFT); direct alignment (DPO); and reinforcement learning with Group Relative Policy Optimization (GRPO). We post-train a small LLM (Llama-3.1-8B-Instruct) and evaluate its performance on unseen instances of the training games, on unseen games, and on standard benchmarks. We find that imitation learning through SFT improves performance on unseen instances but negatively impacts other skills, while interactive learning with GRPO shows balanced improvements without loss of skills. We release the framework and the baseline training setups to foster research in this promising new direction of “learning in (synthetic) interaction”.
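
As a rough illustration of the interactive-learning signal described above, the sketch below shows the group-relative reward normalization at the core of GRPO, applied to a hypothetical binary game-success reward. This is a minimal sketch under assumed conventions, not the released Playpen code: game_reward and the transcript fields are invented for illustration.

```python
# Minimal sketch of GRPO's group-relative credit assignment driven by
# dialogue-game feedback (hypothetical names; not the Playpen API).
import statistics

def group_relative_advantages(rewards, eps=1e-6):
    """Standardize each reward within its group of rollouts for the same
    game instance, so the policy is pushed toward completions that beat
    their own group's average rather than an absolute threshold."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

def game_reward(transcript):
    """Hypothetical referee score: 1.0 if the game's goal was reached
    without breaking a rule, else 0.0 (binary success)."""
    ok = transcript.get("goal_reached") and not transcript.get("rule_violation")
    return 1.0 if ok else 0.0

# Sample G self-play rollouts for one game instance, score each with the
# referee, and turn the scores into group-relative advantages.
rollouts = [
    {"goal_reached": True,  "rule_violation": False},
    {"goal_reached": False, "rule_violation": False},
    {"goal_reached": True,  "rule_violation": True},
    {"goal_reached": True,  "rule_violation": False},
]
rewards = [game_reward(t) for t in rollouts]
print(group_relative_advantages(rewards))  # winners > 0, losers < 0
```

In a full setup, these advantages would weight GRPO’s clipped policy-gradient loss over the sampled game transcripts, replacing the output of a learned reward model with feedback computed directly from the game’s rules.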