Transformers as Graph-to-Graph Models
James Henderson, Alireza Mohammadshahi, Andrei Coman, Lesly Miculicich
Abstract
We argue that Transformers are essentially graph-to-graph models, with sequences just being a special case. Attention weights are functionally equivalent to graph edges. Our Graph-to-Graph Transformer architecture makes this ability explicit, by inputting graph edges into the attention weight computations and predicting graph edges with attention-like functions, thereby integrating explicit graphs into the latent graphs learned by pretrained Transformers. Adding iterative graph refinement provides a joint embedding of input, output, and latent graphs, allowing non-autoregressive graph prediction to optimise the complete graph without any bespoke pipeline or decoding strategy. Empirical results show that this architecture achieves state-of-the-art accuracies for modelling a variety of linguistic structures, integrating very effectively with the latent linguistic representations learned by pretraining.
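A minimal sketch of the two mechanisms the abstract describes: input graph edges entering the attention weight computation, and output graph edges being predicted with an attention-like (here, bilinear) scoring function. Everything below (the name GraphToGraphAttention, the single-head simplification, the additive edge-label bias on the attention logits) is an illustrative assumption, not the authors' published implementation, which differs in its details.

```python
# Illustrative sketch only; names and design choices are assumptions,
# not the authors' published code.
import torch
import torch.nn as nn

class GraphToGraphAttention(nn.Module):
    """Single-head attention that reads and predicts labelled graph edges."""

    def __init__(self, d_model: int, n_edge_labels: int):
        super().__init__()
        self.d = d_model
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # One scalar bias per input edge label (label 0 = "no edge"), added
        # to the attention logit of each token pair: the graph input.
        self.edge_bias = nn.Embedding(n_edge_labels, 1)
        # Attention-like bilinear scorer over token pairs: the graph output.
        self.edge_scorer = nn.Bilinear(d_model, d_model, n_edge_labels)

    def forward(self, x: torch.Tensor, edges: torch.Tensor):
        # x: (batch, seq, d_model) token states; edges: (batch, seq, seq) label ids
        q, k, v = self.q(x), self.k(x), self.v(x)
        logits = q @ k.transpose(-2, -1) / self.d ** 0.5
        logits = logits + self.edge_bias(edges).squeeze(-1)  # inject input graph
        h = logits.softmax(-1) @ v
        # Score every ordered token pair for every edge label.
        n = h.size(1)
        hi = h.unsqueeze(2).expand(-1, -1, n, -1).contiguous()
        hj = h.unsqueeze(1).expand(-1, n, -1, -1).contiguous()
        edge_scores = self.edge_scorer(hi, hj)  # (batch, seq, seq, n_edge_labels)
        return h, edge_scores

# Example: 2 sentences of 5 tokens, 16-dim states, 4 edge labels.
layer = GraphToGraphAttention(16, 4)
x = torch.randn(2, 5, 16)
edges = torch.randint(0, 4, (2, 5, 5))
h, edge_scores = layer(x, edges)
```

The iterative graph refinement mentioned in the abstract roughly corresponds to feeding the predicted edges (e.g. edge_scores.argmax(-1)) back in as the edges input of a subsequent pass, so that input, output, and latent graphs are jointly embedded.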
- Anthology ID: 2023.bigpicture-1.8
- Volume: Proceedings of the Big Picture Workshop
- Month: December
- Year: 2023
- Address: Singapore
- Editors: Yanai Elazar, Allyson Ettinger, Nora Kassner, Sebastian Ruder, Noah A. Smith
- Venue: BigPicture
- Publisher: Association for Computational Linguistics
- Pages: 93–107
- URL: https://aclanthology.org/2023.bigpicture-1.8
- DOI: 10.18653/v1/2023.bigpicture-1.8
- Cite (ACL): James Henderson, Alireza Mohammadshahi, Andrei Coman, and Lesly Miculicich. 2023. Transformers as Graph-to-Graph Models. In Proceedings of the Big Picture Workshop, pages 93–107, Singapore. Association for Computational Linguistics.
- Cite (Informal): Transformers as Graph-to-Graph Models (Henderson et al., BigPicture 2023)
- PDF: https://preview.aclanthology.org/emnlp-22-attachments/2023.bigpicture-1.8.pdf