Tracr-Injection: Distilling Algorithms into Pre-trained Language Models

Tomás Vergara Browne, Alvaro Soto


Abstract
Motivated by the surge of large language models, there has been a push to formally characterize the symbolic abilities intrinsic to the transformer architecture. RASP, a programming language that can be compiled directly into transformer weights, has been proposed as a way to implement such algorithms. However, the tasks expressible in RASP are rarely ones a model picks up from natural unsupervised data, revealing a mismatch between the theoretical capabilities of the transformer architecture and the practical learnability of those capabilities from unsupervised data. We propose tracr-injection, a method that distills algorithms written in RASP directly into a pre-trained language model. We showcase the method by injecting three different algorithms into a language model, and we show that it creates an interpretable subspace within the model's residual stream, which can be decoded into the variables present in the RASP program. We also find that the method can improve out-of-distribution performance over our baseline, indicating that a more symbolic mechanism is indeed at work inside the model. We release the code used to run our experiments.
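To illustrate the kind of program involved, below is a minimal sketch of a RASP program compiled into concrete transformer weights with DeepMind's tracr compiler. It follows the sequence-length example from the public tracr repository; it is not one of the algorithms injected in the paper, and the vocabulary and sequence-length settings are illustrative assumptions.

```python
from tracr.rasp import rasp
from tracr.compiler import compiling

def make_length():
    # Select every (query, key) pair: each position attends to all positions.
    all_true = rasp.Select(rasp.tokens, rasp.tokens, rasp.Comparison.TRUE)
    # SelectorWidth counts the selected positions, i.e. the sequence length.
    return rasp.SelectorWidth(all_true)

# Compile the RASP program into an actual transformer model.
model = compiling.compile_rasp_to_model(
    make_length(),
    vocab={"a", "b", "c", "d"},  # illustrative toy vocabulary
    max_seq_len=5,
    compiler_bos="bos",
)

out = model.apply(["bos", "a", "b", "c"])
print(out.decoded)  # ['bos', 3, 3, 3]
```

Because the compiler assigns each RASP variable to a known subspace of the residual stream, intermediate values like the output of `make_length` remain directly decodable, which is the property tracr-injection aims to preserve inside a pre-trained model.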
Anthology ID:
2025.findings-acl.146
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2831–2843
URL:
https://preview.aclanthology.org/landing_page/2025.findings-acl.146/
Cite (ACL):
Tomás Vergara Browne and Alvaro Soto. 2025. Tracr-Injection: Distilling Algorithms into Pre-trained Language Models. In Findings of the Association for Computational Linguistics: ACL 2025, pages 2831–2843, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Tracr-Injection: Distilling Algorithms into Pre-trained Language Models (Vergara Browne & Soto, Findings 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.findings-acl.146.pdf