Repetition Neurons: How Do Language Models Produce Repetitions?

Tatsuya Hiraoka, Kentaro Inui


Abstract
This paper introduces repetition neurons, which can be regarded as “skill neurons” responsible for the repetition problem in text generation. These neurons activate progressively more strongly as repetition continues, indicating that they treat repetition as a task of repeatedly copying the previous context, similar to in-context learning. We identify these repetition neurons by comparing activation values before and after the onset of repetition in texts generated by recent pre-trained language models. We analyze the repetition neurons in three English and one Japanese pre-trained language model and observe similar patterns across them.
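As a rough illustration of the identification idea the abstract describes (not the authors' exact procedure), the sketch below ranks neurons by how much their mean activation rises after the onset of repetition. The activation matrix, onset index, and function name are hypothetical stand-ins; in practice the activations would be collected via forward hooks on the model's feed-forward layers.

```python
import numpy as np

def find_repetition_neurons(activations: np.ndarray, onset: int, top_k: int = 10):
    """Rank neurons by the increase in mean activation after repetition begins.

    activations: array of shape (num_tokens, num_neurons); activations[t, n]
    is the activation of neuron n at token position t (hypothetical input).
    onset: token index at which repetition starts.
    """
    pre = activations[:onset].mean(axis=0)    # mean activation before repetition
    post = activations[onset:].mean(axis=0)   # mean activation during repetition
    delta = post - pre                        # per-neuron activation increase
    return np.argsort(delta)[::-1][:top_k]    # most "repetition-like" neurons first

# Toy usage: 200 tokens, 1024 neurons, repetition starting at token 120.
acts = np.random.randn(200, 1024)
acts[120:, 42] += 3.0                         # neuron 42 fires more during repetition
print(find_repetition_neurons(acts, onset=120, top_k=3))
```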
Anthology ID:
2025.naacl-short.41
Volume:
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
483–495
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.naacl-short.41/
Cite (ACL):
Tatsuya Hiraoka and Kentaro Inui. 2025. Repetition Neurons: How Do Language Models Produce Repetitions?. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers), pages 483–495, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Repetition Neurons: How Do Language Models Produce Repetitions? (Hiraoka & Inui, NAACL 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.naacl-short.41.pdf