Saturated Transformers are Constant-Depth Threshold Circuits

William Merrill, Ashish Sabharwal, Noah A. Smith


Abstract
Transformers have become a standard neural network architecture for many NLP problems, motivating theoretical analysis of their power in terms of formal languages. Recent work has shown that transformers with hard attention are quite limited in power (Hahn, 2020), as they can be simulated by constant-depth AND/OR circuits (Hao et al., 2022). However, hard attention is a strong assumption, which may complicate the relevance of these results in practice. In this work, we analyze the circuit complexity of transformers with saturated attention: a generalization of hard attention that more closely captures the attention patterns learnable in practical transformers. We first show that saturated transformers transcend the known limitations of hard-attention transformers. We then prove saturated transformers with floating-point values can be simulated by constant-depth threshold circuits, giving the class TC0 as an upper bound on the formal languages they recognize.
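To make the abstract's central notion concrete, here is a minimal NumPy-style sketch of saturated attention as described above: attention weight is spread uniformly over all maximum-scoring positions, whereas hard attention would select a single maximizing position. The function name, the `tol` tie-breaking tolerance, and the array shapes are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def saturated_attention(scores, values, tol=1e-9):
    """Saturated attention (sketch): average the value vectors at all
    positions whose score ties for the maximum. Hard attention would
    instead pick exactly one maximizing position."""
    # scores: shape (n,), values: shape (n, d)
    max_score = scores.max()
    mask = scores >= max_score - tol   # positions tied for the maximum (tol is an illustrative choice)
    weights = mask / mask.sum()        # uniform weight over the tied positions
    return weights @ values            # shape (d,)
```

This limiting behavior, uniform weight over the argmax set, is what distinguishes saturated attention from hard attention and underlies the TC0 upper bound the abstract states.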
Anthology ID:
2022.tacl-1.49
Volume:
Transactions of the Association for Computational Linguistics, Volume 10
Year:
2022
Address:
Cambridge, MA
Editors:
Brian Roark, Ani Nenkova
Venue:
TACL
Publisher:
MIT Press
Pages:
843–856
URL:
https://aclanthology.org/2022.tacl-1.49
DOI:
10.1162/tacl_a_00493
Cite (ACL):
William Merrill, Ashish Sabharwal, and Noah A. Smith. 2022. Saturated Transformers are Constant-Depth Threshold Circuits. Transactions of the Association for Computational Linguistics, 10:843–856.
Cite (Informal):
Saturated Transformers are Constant-Depth Threshold Circuits (Merrill et al., TACL 2022)
PDF:
https://preview.aclanthology.org/landing_page/2022.tacl-1.49.pdf
Video:
https://preview.aclanthology.org/landing_page/2022.tacl-1.49.mp4