Quantifying Logical Consistency in Transformers via Query-Key Alignment
Eduard Tulchinskii | Laida Kushnareva | Anastasia Voznyuk | Andrei Andriiainen | Irina Piontkovskaya | Evgeny Burnaev | Serguei Barannikov
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) excel at many NLP tasks, yet their multi-step logical reasoning remains unreliable. Existing solutions such as Chain-of-Thought prompting generate intermediate steps but provide no internal check of their logical coherence. In this paper, we use the “QK-score”, a lightweight metric based on query–key alignments within transformer attention heads, to evaluate the logical reasoning capabilities of LLMs. Our method automatically identifies attention heads that play a key role in distinguishing valid from invalid logical inferences, enabling efficient inference-time evaluation via a single forward pass. It reveals latent reasoning structure in LLMs and provides a scalable mechanistic alternative to ablation-based analysis. Across three benchmarks (ProntoQA-OOD, PARARULE-Plus, and Multi-LogiEval) and models ranging from 1.5B to 70B parameters, the selected heads predict logical validity up to 14% better than the models' own output probabilities, and they remain robust under distractors and at reasoning depths up to d ≤ 6.
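As a rough illustration of the idea behind a query–key alignment score (a minimal sketch, not the exact implementation described in the paper), the snippet below scores two candidate answers by the scaled dot product between one attention head's query vector at the position where the model states its verdict and that head's key vectors at the answer tokens. The tensor names, the choice of positions, and the use of random vectors in place of real head projections are all assumptions made for the example.

```python
import torch

def qk_score(q: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    """Scaled dot product between one head's query vector and key vector,
    an illustrative form of a query-key alignment score."""
    return (q @ k) / q.shape[-1] ** 0.5

# Toy stand-ins for a single head's projections (in practice these would be
# captured from the model during one forward pass):
#   q_last    -- query at the position where the model emits its answer
#   k_valid / k_invalid -- keys at the tokens of the two candidate answers
torch.manual_seed(0)
d_head = 64
q_last = torch.randn(d_head)
k_valid = torch.randn(d_head)
k_invalid = torch.randn(d_head)

# The candidate whose key aligns best with the query is the head's prediction.
scores = {
    "valid": qk_score(q_last, k_valid).item(),
    "invalid": qk_score(q_last, k_invalid).item(),
}
prediction = max(scores, key=scores.get)
print(scores, "->", prediction)
```

In this view, a head is useful for the task if ranking answers by its alignment score agrees with the gold validity labels more often than the model's own output probabilities do; selecting such heads on held-out data is what allows the evaluation to run in a single forward pass.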