Continual Pre-training on Character-level Noisy Texts Makes Decoder-based Language Models Robust Few-shot Learners

Takeshi Kojima, Yutaka Matsuo, Yusuke Iwasawa


Abstract
Recent decoder-based pre-trained language models (PLMs) generally use subword tokenizers. However, character-level perturbations drastically change how the tokenizers segment text, making PLMs vulnerable to such noise. This study proposes a continual pre-training method that converts decoder-based PLMs with subword tokenizers into perturbation-robust few-shot in-context learners. Our method continually trains decoder-based PLMs to predict the next tokens conditioned on artificially created character-level noisy texts. Since decoder-based language models are auto-regressive, we exclude noised words from the prediction targets. In addition, to maintain the same word-prediction performance under noisy text as under clean text, our method employs word distribution matching between the original PLM and the model being trained. We conducted experiments on various subword-based PLMs, including GPT2, Pythia, Mistral, Gemma2, and Llama3, ranging from 1B to 8B parameters. The results demonstrate that our method consistently improves few-shot in-context learning performance on downstream tasks containing actual typos or misspellings as well as artificial noise.
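The abstract describes two ingredients: masking noised words out of the next-token loss, and matching word distributions against the original PLM. The sketch below illustrates one plausible reading of that objective; it is not the authors' released code. The noise model (adjacent-character swaps), the per-word tokenization, the word-boundary alignment, and the `kl_weight` loss weighting are all assumptions made for illustration.

```python
# Hypothetical sketch of the objective described in the abstract (assumed, not
# the authors' implementation): continual next-token training on character-level
# noised text, skipping tokens of noised words in the loss, plus a KL term that
# matches next-word distributions to the frozen original PLM on clean text.
import random
import torch
import torch.nn.functional as F


def add_char_noise(words, p=0.15):
    """Swap two adjacent characters in roughly p of the words (one simple noise type)."""
    noised, is_noised = [], []
    for w in words:
        if len(w) > 1 and random.random() < p:
            i = random.randrange(len(w) - 1)
            w = w[:i] + w[i + 1] + w[i] + w[i + 2:]
            is_noised.append(True)
        else:
            is_noised.append(False)
        noised.append(w)
    return noised, is_noised


def tokenize_per_word(tokenizer, words):
    """Tokenize word by word; return flat token ids, each token's word index,
    and the position of each word's last token (which predicts the next word)."""
    ids, word_of_token, last_pos = [], [], []
    for j, w in enumerate(words):
        toks = tokenizer(" " + w, add_special_tokens=False)["input_ids"]
        ids.extend(toks)
        word_of_token.extend([j] * len(toks))
        last_pos.append(len(ids) - 1)
    return ids, word_of_token, last_pos


def training_step(model, ref_model, tokenizer, text, optimizer,
                  kl_weight=1.0, device="cpu"):
    words = text.split()
    noised_words, is_noised = add_char_noise(words)

    noisy_ids, word_of_tok, noisy_last = tokenize_per_word(tokenizer, noised_words)
    clean_ids, _, clean_last = tokenize_per_word(tokenizer, words)

    noisy = torch.tensor([noisy_ids], device=device)
    clean = torch.tensor([clean_ids], device=device)

    logits = model(noisy).logits[0]                  # (T_noisy, vocab)
    with torch.no_grad():
        ref_logits = ref_model(clean).logits[0]      # frozen original PLM, clean text

    # 1) Next-token cross-entropy, skipping targets that belong to noised words.
    ce_terms = []
    for t in range(len(noisy_ids) - 1):
        if not is_noised[word_of_tok[t + 1]]:
            ce_terms.append(F.cross_entropy(logits[t].unsqueeze(0),
                                            noisy[0, t + 1].unsqueeze(0)))
    ce = torch.stack(ce_terms).mean() if ce_terms else logits.sum() * 0.0

    # 2) Word-level distribution matching: at each word boundary, pull the trained
    #    model's next-word distribution (noisy context) toward the original PLM's
    #    distribution (clean context).
    kl_terms = []
    for j in range(len(words) - 1):
        p_ref = F.softmax(ref_logits[clean_last[j]], dim=-1)
        log_q = F.log_softmax(logits[noisy_last[j]], dim=-1)
        kl_terms.append(F.kl_div(log_q, p_ref, reduction="sum"))
    kl = torch.stack(kl_terms).mean() if kl_terms else logits.sum() * 0.0

    loss = ce + kl_weight * kl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Aligning the two models at word boundaries (rather than token positions) is assumed here because the clean and noised texts generally tokenize into sequences of different lengths, while their words remain in one-to-one correspondence.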
Anthology ID: 2025.tacl-1.38
Volume: Transactions of the Association for Computational Linguistics, Volume 13
Year: 2025
Address: Cambridge, MA
Venue: TACL
Publisher: MIT Press
Pages: 831–847
URL: https://preview.aclanthology.org/ingest-eacl/2025.tacl-1.38/
DOI: 10.1162/tacl.a.21
Cite (ACL): Takeshi Kojima, Yutaka Matsuo, and Yusuke Iwasawa. 2025. Continual Pre-training on Character-level Noisy Texts Makes Decoder-based Language Models Robust Few-shot Learners. Transactions of the Association for Computational Linguistics, 13:831–847.
Cite (Informal): Continual Pre-training on Character-level Noisy Texts Makes Decoder-based Language Models Robust Few-shot Learners (Kojima et al., TACL 2025)
PDF: https://preview.aclanthology.org/ingest-eacl/2025.tacl-1.38.pdf