SQUIRE: A Sequence-to-sequence Framework for Multi-hop Knowledge Graph Reasoning

Yushi Bai, Xin Lv, Juanzi Li, Lei Hou, Yincen Qu, Zelin Dai, Feiyu Xiong


Abstract
Multi-hop knowledge graph (KG) reasoning has been widely studied in recent years to provide interpretable predictions on missing links with evidential paths. Most previous works use reinforcement learning (RL) based methods that learn to navigate the path towards the target entity. However, these methods suffer from slow and poor convergence, and they may fail to infer a certain path when there is a missing edge along the path. Here we present SQUIRE, the first Sequence-to-sequence based multi-hop reasoning framework, which utilizes an encoder-decoder Transformer structure to translate the query to a path. Our framework brings about two benefits: (1) It can learn and predict in an end-to-end fashion, which gives better and faster convergence; (2) Our Transformer model does not rely on existing edges to generate the path, and has the flexibility to complete missing edges along the path, especially in sparse KGs. Experiments on standard and sparse KGs show that our approach yields significant improvement over prior methods, while converging 4x-7x faster.
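The core idea of "translating the query to a path" can be illustrated with a minimal sketch: a query (head entity, query relation) is serialized as a source token sequence, and the supervision target is a relational path to the answer entity, found here by breadth-first search over a toy KG. The function names, the toy graph, and the serialization scheme below are illustrative assumptions, not the authors' exact pipeline.

```python
from collections import deque

def find_path(kg, head, tail, max_hops=3):
    """Breadth-first search for a relational path from head to tail.

    Returns an alternating [rel1, ent1, rel2, ent2, ...] list, or None.
    (Illustrative only; SQUIRE's actual path supervision may differ.)
    """
    queue = deque([(head, [])])
    visited = {head}
    while queue:
        node, path = queue.popleft()
        if len(path) // 2 >= max_hops:
            continue
        for rel, nxt in kg.get(node, []):
            if nxt == tail:
                return path + [rel, nxt]
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [rel, nxt]))
    return None

def make_example(head, rel, tail, kg):
    """Serialize a (query, path) pair as token sequences for seq2seq training."""
    src = [head, rel]                      # encoder input: the query
    path = find_path(kg, head, tail)       # decoder target: an evidential path
    tgt = (path or []) + ["<eos>"]
    return src, tgt

# Toy KG: entity -> [(relation, neighbor), ...]
kg = {
    "Paris":  [("capital_of", "France")],
    "France": [("part_of", "Europe")],
}
src, tgt = make_example("Paris", "located_in", "Europe", kg)
# src -> ["Paris", "located_in"]
# tgt -> ["capital_of", "France", "part_of", "Europe", "<eos>"]
```

An encoder-decoder Transformer trained on such pairs then generates the target path token by token at inference time, which is how a seq2seq model can emit an edge even when it is absent from the graph.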
Anthology ID:
2022.emnlp-main.107
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1649–1662
URL:
https://aclanthology.org/2022.emnlp-main.107
Cite (ACL):
Yushi Bai, Xin Lv, Juanzi Li, Lei Hou, Yincen Qu, Zelin Dai, and Feiyu Xiong. 2022. SQUIRE: A Sequence-to-sequence Framework for Multi-hop Knowledge Graph Reasoning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1649–1662, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
SQUIRE: A Sequence-to-sequence Framework for Multi-hop Knowledge Graph Reasoning (Bai et al., EMNLP 2022)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2022.emnlp-main.107.pdf