What if Othello-Playing Language Models Could See?

Xinyi Chen, Yifei Yuan, Jiaang Li, Serge Belongie, Maarten de Rijke, Anders Søgaard


Abstract
Language models are often said to face a symbol grounding problem. While some have argued the problem can be solved without resort to other modalities, many have speculated that grounded learning is more efficient. We explore this question in Othello, a simplified, rule-based world that offers a controlled and interpretable testbed for studying world understanding. Building on prior work, we introduce VISOTHELLO, a multi-modal model trained jointly on move sequences and board images. Using the Othello rule understanding task, we examine whether multi-modal learning provides advantages over text-only approaches. We further evaluate robustness under semantically irrelevant perturbations and analyze the consistency of cross-modal alignment. Our results suggest that multi-modal training not only improves performance and robustness but also promotes convergence toward shared internal representations across different model architectures.
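The rule-understanding evaluation referenced in the abstract asks whether a model's next-move predictions respect Othello's flanking rule. For reference, the short sketch below computes the legal moves directly from the rules of the game; it is only an illustration of the task, not the authors' VISOTHELLO implementation, and the board encoding (0 = empty, 1 = black, -1 = white) is our own assumption.

# A minimal, self-contained sketch of the legal-move computation behind the
# Othello rule-understanding task: given a board state and the player to move,
# enumerate every square where a disc may legally be placed. Illustrative only;
# the board encoding (0 = empty, 1 = black, -1 = white) is a hypothetical choice.

DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1),
              (0, -1),           (0, 1),
              (1, -1),  (1, 0),  (1, 1)]

def legal_moves(board, player):
    """Return the sorted list of (row, col) squares that are legal for `player` (1 or -1)."""
    moves = set()
    for r in range(8):
        for c in range(8):
            if board[r][c] != 0:
                continue  # discs may only be placed on empty squares
            for dr, dc in DIRECTIONS:
                rr, cc = r + dr, c + dc
                seen_opponent = False
                # Walk over a contiguous run of opponent discs in this direction...
                while 0 <= rr < 8 and 0 <= cc < 8 and board[rr][cc] == -player:
                    seen_opponent = True
                    rr, cc = rr + dr, cc + dc
                # ...the run counts as a capture only if it ends on the player's own disc.
                if seen_opponent and 0 <= rr < 8 and 0 <= cc < 8 and board[rr][cc] == player:
                    moves.add((r, c))
                    break
    return sorted(moves)

# Standard opening position: black (1) to move has exactly four legal moves.
board = [[0] * 8 for _ in range(8)]
board[3][3], board[4][4] = -1, -1   # white discs
board[3][4], board[4][3] = 1, 1     # black discs
assert legal_moves(board, 1) == [(2, 3), (3, 2), (4, 5), (5, 4)]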
Anthology ID:
2025.findings-emnlp.673
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12598–12609
URL:
https://preview.aclanthology.org/name-variant-enfa-fane/2025.findings-emnlp.673/
DOI:
10.18653/v1/2025.findings-emnlp.673
Cite (ACL):
Xinyi Chen, Yifei Yuan, Jiaang Li, Serge Belongie, Maarten de Rijke, and Anders Søgaard. 2025. What if Othello-Playing Language Models Could See? In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 12598–12609, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
What if Othello-Playing Language Models Could See? (Chen et al., Findings 2025)
PDF:
https://preview.aclanthology.org/name-variant-enfa-fane/2025.findings-emnlp.673.pdf
Checklist:
https://preview.aclanthology.org/name-variant-enfa-fane/2025.findings-emnlp.673.checklist.pdf