Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale
Laurent Sartran, Samuel Barrett, Adhiguna Kuncoro, Miloš Stanojević, Phil Blunsom, Chris Dyer
Abstract
We introduce Transformer Grammars (TGs), a novel class of Transformer language models that combine (i) the expressive power, scalability, and strong performance of Transformers and (ii) recursive syntactic compositions, which here are implemented through a special attention mask and deterministic transformation of the linearized tree. We find that TGs outperform various strong baselines on sentence-level language modeling perplexity, as well as on multiple syntax-sensitive language modeling evaluation metrics. Additionally, we find that the recursive syntactic composition bottleneck which represents each sentence as a single vector harms perplexity on document-level language modeling, providing evidence that a different kind of memory mechanism—one that is independent of composed syntactic representations—plays an important role in current successful models of long text.- Anthology ID:
- Anthology ID: 2022.tacl-1.81
- Volume: Transactions of the Association for Computational Linguistics, Volume 10
- Year: 2022
- Address: Cambridge, MA
- Editors: Brian Roark, Ani Nenkova
- Venue: TACL
- Publisher: MIT Press
- Pages: 1423–1439
- URL: https://aclanthology.org/2022.tacl-1.81
- DOI: 10.1162/tacl_a_00526
- Cite (ACL): Laurent Sartran, Samuel Barrett, Adhiguna Kuncoro, Miloš Stanojević, Phil Blunsom, and Chris Dyer. 2022. Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale. Transactions of the Association for Computational Linguistics, 10:1423–1439.
- Cite (Informal): Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale (Sartran et al., TACL 2022)
- PDF: https://preview.aclanthology.org/nschneid-patch-5/2022.tacl-1.81.pdf