Abstract
We analyse how a transformer-based language model learns the rules of chess from text data of recorded games. Using chess-specific metrics, we show how model capacity and the amount of available training data influence a language model's learning success. With these metrics, we show that, within the studied range, training on more games yields significantly better results for the same training time, whereas model size shows no such clear influence. Interestingly, the usual evaluation metrics for language models, predictive accuracy and perplexity, give no indication of these differences. Further examination of the trained models reveals how they store information about the board state in the activations of neuron groups, and how the overall sequence of previous moves influences the newly generated moves.
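The chess-specific metrics are only named in the abstract, not defined. As a hedged illustration of what such a metric can look like, the sketch below computes the share of model-generated moves that are legal in the current position, using the python-chess library. The function name, the SAN input format, and the `generate_move` hook are assumptions made for this sketch, not the paper's actual code.

```python
# Illustrative sketch (not from the paper): one possible chess-specific metric,
# the fraction of model-generated moves that are legal in the current position.
# Assumes games are given as SAN move lists and that `generate_move` wraps the
# trained language model; both names are hypothetical.
import chess

def legal_move_rate(game_prefixes, generate_move):
    """game_prefixes: iterable of SAN move lists, e.g. [["e4", "e5", "Nf3"], ...].
    generate_move(prefix) -> the model's proposed next move in SAN."""
    legal = total = 0
    for prefix in game_prefixes:
        board = chess.Board()
        for san in prefix:
            board.push_san(san)              # replay the recorded game prefix
        prediction = generate_move(prefix)   # model's proposed continuation
        total += 1
        try:
            board.parse_san(prediction)      # raises ValueError if illegal or unparseable
            legal += 1
        except ValueError:
            pass
    return legal / total if total else 0.0
```

In a setup like this, a model that has learned the rules well should approach a legal-move rate of 1.0 even on unseen positions, a property that perplexity on held-out game text does not directly capture.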
- Anthology ID: 2021.ranlp-1.153
- Volume: Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)
- Month: September
- Year: 2021
- Address: Held Online
- Editors: Ruslan Mitkov, Galia Angelova
- Venue: RANLP
- Publisher: INCOMA Ltd.
- Pages: 1369–1379
- URL: https://aclanthology.org/2021.ranlp-1.153
- Cite (ACL): Andreas Stöckl. 2021. Watching a Language Model Learning Chess. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 1369–1379, Held Online. INCOMA Ltd.
- Cite (Informal): Watching a Language Model Learning Chess (Stöckl, RANLP 2021)
- PDF: https://preview.aclanthology.org/fix-dup-bibkey/2021.ranlp-1.153.pdf