Train It and Forget It: Merge Lists are Unnecessary for BPE Inference in Language Models

Tomohiro Sawada, Kartik Goyal


Abstract
Standard Byte-Pair Encoding (BPE) tokenization compresses text by pairing a learned token vocabulary with a detailed merge list. Recent work has shown that this merge list exposes a potential attack surface for extracting information about a language model’s training data. In this paper, we explore the downstream impact of BPE inference algorithms that do not rely on this merge list at all, and hence differ from the encoding process used during BPE training. To address this question, we investigate two broad classes of BPE inference schemes that differ from BPE application during training: (a) targeted deviations from the merge list, including random merge orders and various corruptions of the merge list such as deletion and truncation, and (b) non-targeted BPE inference algorithms that do not depend on the merge list but instead compress the text either greedily or exactly. Extensive experiments across diverse language modeling tasks, such as accuracy-based QA benchmarks, machine translation, and open-ended generation, reveal that while targeted deviations from the merge list cause significant degradation in language model performance, the non-targeted, merge-list-free inference algorithms have minimal impact on downstream performance, often much smaller than expected. These findings pave the way for simpler and potentially more privacy-preserving tokenization schemes that do not catastrophically compromise model performance.
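The abstract contrasts merge-list-driven encoding with inference that uses only the token vocabulary. As a minimal sketch of the "greedy" flavor of merge-list-free inference (the vocabulary and helper name below are illustrative, not the paper's implementation), a left-to-right longest-match tokenizer needs no merge list at all:

```python
# Hedged sketch: merge-list-free greedy BPE inference.
# Rather than replaying the learned merge list, we use only the token
# vocabulary and repeatedly take the longest vocabulary entry that
# matches at the current position. The vocabulary here is a toy example.
def greedy_tokenize(text, vocab):
    """Left-to-right longest-match tokenization using only the vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest possible match first.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # Fall back to a single character (assumed base vocabulary).
            tokens.append(text[i])
            i += 1
    return tokens

vocab = {"l", "o", "w", "e", "r", "lo", "low", "er", "lower"}
print(greedy_tokenize("lower", vocab))   # single longest match
print(greedy_tokenize("lowest", vocab))  # longest prefix, then fallbacks
```

Note that this greedy scheme can segment text differently from merge-list BPE; the paper's finding is that such differences have surprisingly little downstream effect.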
Anthology ID:
2025.emnlp-main.1775
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
35033–35046
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1775/
Cite (ACL):
Tomohiro Sawada and Kartik Goyal. 2025. Train It and Forget It: Merge Lists are Unnecessary for BPE Inference in Language Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 35033–35046, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Train It and Forget It: Merge Lists are Unnecessary for BPE Inference in Language Models (Sawada & Goyal, EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1775.pdf
Checklist:
 2025.emnlp-main.1775.checklist.pdf