Token-Wise Kernels (TWiKers) for Vicinity-Aware Attention in Transformers

Kuangdai Leng, Jia Bi, Samuel Pinilla, Jaehoon Cha


Abstract
Self-attention mechanisms in transformers enable tokens to interact across a sequence but lack an explicit inductive bias for capturing local contextual dependencies, an inherent characteristic of natural languages. We propose Token-Wise Kernels (TWiKers), a novel enhancement to transformers that learns token-specific convolutional kernels applied to the keys or values. Each token is assigned a small kernel, initialized to the "Central Dirac" (e.g., [0, 1, 0] for size 3), meaning the token alone "bears" the attention it receives from all other tokens. During training, these kernels adapt; the greater a kernel's deviation from the Central Dirac, the more strongly attention is redistributed to neighboring tokens. This introduces the first transformer weights with direct semantic interpretability. Our experiments show that content words (e.g., nouns and verbs) retain self-focus, while function words (e.g., prepositions and conjunctions) shift attention toward their neighbors, aligning with their syntactic and semantic roles. We further apply TWiKers to distinguish literary genres, historical periods, and authors, demonstrating their effectiveness in capturing high-level stylistic patterns. Finally, we demonstrate the potential of TWiKers as an effective inductive bias for improving transformer training, validated across a range of downstream tasks.
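The abstract describes each token owning a small kernel that mixes its key or value vector with those of its neighbors, where the "Central Dirac" initialization [0, 1, 0] leaves the sequence unchanged. The sketch below illustrates that mechanism in NumPy under our own assumptions; the function name `twiker_apply`, the zero-padding at sequence edges, and all shapes are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def twiker_apply(values, kernels):
    """Apply one small kernel per token along the sequence axis.

    values:  (seq_len, d) key or value vectors, one row per token.
    kernels: (seq_len, k) one kernel of odd size k per token.
    Token i's output is a kernel-weighted mix of its own vector and
    its neighbors'; the Central Dirac kernel [0, 1, 0] reproduces
    the input exactly (no redistribution to neighbors).
    """
    seq_len, d = values.shape
    k = kernels.shape[1]
    half = k // 2
    # Zero-pad the sequence so edge tokens see a full neighborhood
    # (a boundary-handling assumption on our part).
    padded = np.pad(values, ((half, half), (0, 0)))
    out = np.zeros_like(values)
    for i in range(seq_len):
        window = padded[i : i + k]      # neighbors centered on token i, (k, d)
        out[i] = kernels[i] @ window    # token i's private kernel mixes them
    return out

# Central-Dirac initialization: identity behavior before training.
seq_len, d = 5, 4
values = np.random.default_rng(0).normal(size=(seq_len, d))
kernels = np.tile(np.array([0.0, 1.0, 0.0]), (seq_len, 1))
assert np.allclose(twiker_apply(values, kernels), values)
```

After training, a kernel such as [0.5, 0.5, 0.0] for some token would shift half of the attention mass that token receives onto its left neighbor, which is the "attention redistribution" the abstract attributes to function words.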
Anthology ID:
2026.findings-eacl.306
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Màrquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5819–5835
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.306/
Cite (ACL):
Kuangdai Leng, Jia Bi, Samuel Pinilla, and Jaehoon Cha. 2026. Token-Wise Kernels (TWiKers) for Vicinity-Aware Attention in Transformers. In Findings of the Association for Computational Linguistics: EACL 2026, pages 5819–5835, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Token-Wise Kernels (TWiKers) for Vicinity-Aware Attention in Transformers (Leng et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.306.pdf
Checklist:
2026.findings-eacl.306.checklist.pdf