Weight-based Analysis of Detokenization in Language Models: Understanding the First Stage of Inference Without Inference
Go Kamoda, Benjamin Heinzerling, Tatsuro Inaba, Keito Kudo, Keisuke Sakaguchi, Kentaro Inui
Abstract
According to the stages-of-inference hypothesis, early layers of language models map their subword-tokenized input, which does not necessarily correspond to a linguistically meaningful segmentation, to more meaningful representations that form the model's "inner vocabulary". Prior analysis of this *detokenization* stage has predominantly relied on probing and interventions such as path patching, which involve selecting particular inputs, choosing a subset of components that will be patched, and then observing changes in model behavior. Here, we show that several important aspects of the detokenization stage can be understood purely by analyzing model weights, without performing any model inference steps. Specifically, we introduce an analytical decomposition of first-layer attention in GPT-2. Our decomposition yields interpretable terms that quantify the relative contributions of position-related, token-related, and mixed effects. By focusing on terms in this decomposition, we discover weight-based explanations of attention bias toward close tokens and of attention for detokenization.
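The decomposition described in the abstract follows from the fact that a first-layer input is a sum of a token embedding and a positional embedding, so the query-key product splits into four terms. Below is a minimal sketch of such a weight-only decomposition for GPT-2 small via Hugging Face `transformers`; it is not the authors' released code, and for illustration it ignores the pre-attention layer norm and the Q/K projection biases. The helper name `logit_terms` and all variable names are our own.

```python
import torch
from transformers import GPT2Model

model = GPT2Model.from_pretrained("gpt2")
D, H, d_head = 768, 12, 64  # GPT-2 small: hidden size, heads, head width

wte = model.wte.weight.detach()  # token embeddings, shape (50257, 768)
wpe = model.wpe.weight.detach()  # positional embeddings, shape (1024, 768)

# c_attn is a Conv1D packing the Q, K, V projections: weight (768, 2304)
w_q, w_k, _ = model.h[0].attn.c_attn.weight.detach().split(D, dim=1)

def logit_terms(tok_q, tok_k, pos_q, pos_k, head=0):
    """Decompose one first-layer attention logit into four weight-only
    terms. With x_i = wte[t_i] + wpe[i] (layer norm and biases ignored
    in this simplification), the logit q_i . k_j splits into token-token,
    token-position, position-token, and position-position parts whose
    sum equals the full simplified logit."""
    W_Q = w_q[:, head * d_head:(head + 1) * d_head]  # (768, 64)
    W_K = w_k[:, head * d_head:(head + 1) * d_head]
    q = lambda x: x @ W_Q
    k = lambda x: x @ W_K
    s = d_head ** -0.5  # standard 1/sqrt(d_head) attention scaling
    e_q, e_k = wte[tok_q], wte[tok_k]
    p_q, p_k = wpe[pos_q], wpe[pos_k]
    return {
        "tok-tok": s * (q(e_q) @ k(e_k)),  # token-related (detokenization-style) effects
        "tok-pos": s * (q(e_q) @ k(p_k)),  # mixed effects
        "pos-tok": s * (q(p_q) @ k(e_k)),  # mixed effects
        "pos-pos": s * (q(p_q) @ k(p_k)),  # position-related (distance bias) effects
    }
```

Under these assumptions, sweeping the `pos-pos` term over key positions for a fixed query position would surface any bias toward nearby tokens, while scanning the `tok-tok` term over candidate bigram pairs would surface heads that merge subwords; both scans touch only the weights and require no forward pass.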
- Anthology ID: 2025.findings-naacl.355
- Volume: Findings of the Association for Computational Linguistics: NAACL 2025
- Month: April
- Year: 2025
- Address: Albuquerque, New Mexico
- Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 6324–6343
- URL: https://preview.aclanthology.org/Author-page-Marten-During-lu/2025.findings-naacl.355/
- Cite (ACL): Go Kamoda, Benjamin Heinzerling, Tatsuro Inaba, Keito Kudo, Keisuke Sakaguchi, and Kentaro Inui. 2025. Weight-based Analysis of Detokenization in Language Models: Understanding the First Stage of Inference Without Inference. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 6324–6343, Albuquerque, New Mexico. Association for Computational Linguistics.
- Cite (Informal): Weight-based Analysis of Detokenization in Language Models: Understanding the First Stage of Inference Without Inference (Kamoda et al., Findings 2025)
- PDF: https://preview.aclanthology.org/Author-page-Marten-During-lu/2025.findings-naacl.355.pdf