Adaptive Convolution for Multi-Relational Learning

Xiaotian Jiang, Quan Wang, Bin Wang


Abstract
We consider the problem of learning distributed representations for entities and relations of multi-relational data so as to predict missing links therein. Convolutional neural networks have recently shown their superiority for this problem, bringing increased model expressiveness while remaining parameter efficient. Despite this success, previous convolution designs fail to model full interactions between input entities and relations, which potentially limits the performance of link prediction. In this work we introduce ConvR, an adaptive convolutional network designed to maximize entity-relation interactions in a convolutional fashion. ConvR adaptively constructs convolution filters from relation representations and applies these filters across entity representations to generate convolutional features. As such, ConvR enables rich interactions between entity and relation representations at diverse regions, and all of the convolutional features generated capture such interactions. We evaluate ConvR on multiple benchmark datasets. Experimental results show that: (1) ConvR performs substantially better than competitive baselines in almost all metrics and on all datasets; (2) compared with state-of-the-art convolutional models, ConvR is not only more effective but also more efficient, offering a 7% increase in MRR and a 6% increase in Hits@10 while saving 12% in parameter storage.
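To make the adaptive-convolution idea concrete, the following is a minimal PyTorch sketch of scoring a single triple in the ConvR style: the relation embedding is reshaped into a set of convolution filters, which are then slid over the 2D-reshaped subject-entity embedding. The dimensions (a 200-dimensional entity embedding reshaped to 10×20, 32 filters of size 3×3), the variable names, and the omission of dropout and batch normalization are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical dimensions for a ConvR-style sketch (not the paper's exact config).
d_e, eh, ew = 200, 10, 20          # entity dim, reshaped to a 10x20 "image"
c, fh, fw = 32, 3, 3               # relation dim d_r = c*fh*fw = 288, as c filters

num_entities, num_relations = 1000, 50
E = torch.nn.Embedding(num_entities, d_e)            # entity embeddings
R = torch.nn.Embedding(num_relations, c * fh * fw)   # relation embeddings
fc = torch.nn.Linear(c * (eh - fh + 1) * (ew - fw + 1), d_e)

def score(s, r, o):
    """Score a triple (s, r, o); higher means the link is more plausible."""
    x = E(s).view(1, 1, eh, ew)                 # subject embedding as a 1-channel image
    filters = R(r).view(c, 1, fh, fw)           # relation embedding reshaped into c filters
    feats = torch.relu(F.conv2d(x, filters))    # adaptive convolution over the subject
    proj = fc(feats.view(1, -1))                # project convolutional features back to d_e
    return torch.sigmoid((proj * E(o)).sum())   # match against the object embedding

print(score(torch.tensor(0), torch.tensor(1), torch.tensor(2)))
```

Because the filters are built from the relation itself, every convolutional feature reflects an interaction between a local region of the entity representation and the relation, which is the property the abstract highlights.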
Anthology ID:
N19-1103
Volume:
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
978–987
URL:
https://aclanthology.org/N19-1103
DOI:
10.18653/v1/N19-1103
Cite (ACL):
Xiaotian Jiang, Quan Wang, and Bin Wang. 2019. Adaptive Convolution for Multi-Relational Learning. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 978–987, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
Adaptive Convolution for Multi-Relational Learning (Jiang et al., NAACL 2019)
PDF:
https://preview.aclanthology.org/update-css-js/N19-1103.pdf
Video:
https://vimeo.com/353480570
Data:
FB15k, WN18, WN18RR