Punctuations and Predicates in Language Models

Sonakshi Chauhan, Maheep Chaudhary, Choy Kwan Kiu, Samuel Nellessen, Nandi Schoots


Abstract
In this paper we explore where information is collected and how it is propagated through the layers of large language models (LLMs). We begin by examining the surprising computational importance of punctuation tokens, which previous work has identified as attention sinks and memory aids. Using intervention-based techniques, we evaluate the necessity and sufficiency of punctuation tokens across layers in GPT-2, DeepSeek, and Gemma. Our results show stark model-specific differences: in GPT-2, punctuation is both necessary and sufficient in multiple layers, while this holds far less in DeepSeek and not at all in Gemma. Extending beyond punctuation, we ask whether LLMs process different components of the input (e.g., subjects, adjectives, punctuation, full sentences) by forming early static summaries that are reused across the network, or whether the model remains sensitive to changes in these components across layers. We also investigate whether different reasoning rules are processed differently by LLMs. In particular, through interchange interventions and layer-swapping experiments, we find that conditional statements (if, then) and universal quantification (for all) are processed very differently. Our findings offer new insight into the internal mechanisms of punctuation usage and reasoning in LLMs and have implications for interpretability and model analysis.
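The interchange interventions mentioned in the abstract can be illustrated with a minimal sketch: run the model on a "source" input, cache its hidden activations, then rerun on a "base" input while overwriting the activations at chosen token positions (e.g., punctuation positions) in one layer with the cached ones, and measure how much the output moves. The toy stack of per-position tanh layers below is purely illustrative and not the paper's actual models or code; all names (`forward`, `interchange_effect`, `WEIGHTS`) are hypothetical.

```python
import math
import random

random.seed(0)

D = 8          # hidden size of the toy model
N_LAYERS = 4   # number of toy "transformer blocks"

# One random square weight matrix per layer (stand-in for a block's weights).
WEIGHTS = [[[random.gauss(0, 1 / math.sqrt(D)) for _ in range(D)] for _ in range(D)]
           for _ in range(N_LAYERS)]

def _apply_layer(h, w):
    """Per-position linear map followed by tanh (a stand-in for a block)."""
    return [[math.tanh(sum(row[k] * w[k][j] for k in range(D))) for j in range(D)]
            for row in h]

def forward(x, patch=None):
    """Run the toy stack over a (seq_len x D) input; optionally overwrite the
    activations at some positions in one layer with cached activations.
    `patch` is a (layer_index, positions, cached_activations) triple."""
    acts = []
    h = x
    for i, w in enumerate(WEIGHTS):
        h = _apply_layer(h, w)
        if patch is not None and patch[0] == i:
            _, positions, cached = patch
            h = [row[:] for row in h]        # copy before editing in place
            for p in positions:
                h[p] = cached[i][p][:]       # interchange: source -> base
        acts.append([row[:] for row in h])   # cache this layer's activations
    return h, acts

def interchange_effect(base, source, positions, layer):
    """Cache `source` activations, patch them into the `base` run at
    `positions` in `layer`, and return how far the output moves (L2 norm)."""
    _, src_acts = forward(source)
    patched, _ = forward(base, patch=(layer, positions, src_acts))
    clean, _ = forward(base)
    return math.sqrt(sum((a - b) ** 2
                         for pr, cr in zip(patched, clean)
                         for a, b in zip(pr, cr)))
```

A nonzero effect at some layer suggests the model still reads information from those positions there; patching a run with its own activations is a sanity check that should give an effect of exactly zero.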
Anthology ID:
2026.findings-eacl.297
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Marquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5622–5636
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.297/
Cite (ACL):
Sonakshi Chauhan, Maheep Chaudhary, Choy Kwan Kiu, Samuel Nellessen, and Nandi Schoots. 2026. Punctuations and Predicates in Language Models. In Findings of the Association for Computational Linguistics: EACL 2026, pages 5622–5636, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Punctuations and Predicates in Language Models (Chauhan et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.297.pdf
Checklist:
 2026.findings-eacl.297.checklist.pdf