A Multimodal Translation-Based Approach for Knowledge Graph Representation Learning

Hatem Mousselly-Sergieh, Teresa Botschen, Iryna Gurevych, Stefan Roth


Abstract
Current methods for knowledge graph (KG) representation learning focus solely on the structure of the KG and do not exploit external information, such as the visual and linguistic information associated with KG entities. In this paper, we propose a multimodal translation-based approach that defines the energy of a KG triple as the sum of sub-energy functions that leverage both multimodal (visual and linguistic) and structural KG representations. A ranking-based loss is then minimized using a simple neural network architecture. Moreover, we introduce a new large-scale dataset for multimodal KG representation learning. We compare the performance of our approach against several baselines on two standard tasks, namely knowledge graph completion and triple classification, using both our dataset and the WN9-IMG dataset. The results demonstrate that our approach outperforms all baselines on both tasks and datasets.
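
To make the abstract's formulation concrete, below is a minimal PyTorch sketch of a translation-based triple energy defined as a sum of sub-energies over structural and multimodal head/tail representations, trained with a margin-based ranking loss. The class and function names, the linear projection of fixed multimodal features, and all dimensions are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class MultimodalTransE(nn.Module):
    """Sketch: TransE-style energies summed over structural and
    multimodal entity representations (assumed architecture)."""

    def __init__(self, n_entities, n_relations, dim, feat_dim):
        super().__init__()
        # Structural embeddings learned from the KG alone
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        # Hypothetical projection of fixed multimodal features
        # (e.g. concatenated visual + linguistic vectors) into the
        # structural embedding space
        self.proj = nn.Linear(feat_dim, dim)

    @staticmethod
    def sub_energy(h, r, t):
        # Translation energy: ||h + r - t||_2
        return torch.norm(h + r - t, p=2, dim=-1)

    def energy(self, heads, rels, tails, head_feats, tail_feats):
        hs, ts = self.ent(heads), self.ent(tails)              # structural
        hm, tm = self.proj(head_feats), self.proj(tail_feats)  # multimodal
        r = self.rel(rels)
        # Total energy = sum of sub-energies over all combinations
        # of structural and multimodal head/tail representations
        return (self.sub_energy(hs, r, ts)
                + self.sub_energy(hm, r, tm)
                + self.sub_energy(hs, r, tm)
                + self.sub_energy(hm, r, ts))

def margin_ranking_loss(e_pos, e_neg, margin=1.0):
    # Push corrupted (negative) triples to have higher energy than
    # observed triples by at least the margin
    return torch.relu(margin + e_pos - e_neg).mean()
```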
Anthology ID: S18-2027
Volume: Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics
Month: June
Year: 2018
Address: New Orleans, Louisiana
Venues: *SEM | SemEval
SIGs: SIGLEX | SIGSEM
Publisher: Association for Computational Linguistics
Pages: 225–234
URL: https://aclanthology.org/S18-2027
DOI: 10.18653/v1/S18-2027
Cite (ACL): Hatem Mousselly-Sergieh, Teresa Botschen, Iryna Gurevych, and Stefan Roth. 2018. A Multimodal Translation-Based Approach for Knowledge Graph Representation Learning. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 225–234, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal): A Multimodal Translation-Based Approach for Knowledge Graph Representation Learning (Mousselly-Sergieh et al., SemEval 2018)
PDF: https://preview.aclanthology.org/update-css-js/S18-2027.pdf
Data: ImageNet