Jiashi Lin
2025
SaCa: A Highly Compatible Reinforcing Framework for Knowledge Graph Embedding via Structural Pattern Contrast
Jiashi Lin | Changhong Jiang | Yixiao Wang | Xinyi Zhu | Zhongtian Hu | Wei Zhang
Findings of the Association for Computational Linguistics: EMNLP 2025
Knowledge Graph Embedding (KGE) seeks to learn latent representations of entities and relations to support knowledge-driven AI systems. However, existing KGE approaches often exhibit a growing discrepancy between the learned embedding space and the intrinsic structural semantics of the underlying knowledge graph. This divergence stems primarily from an over-reliance on geometric criteria for assessing triple plausibility, whose effectiveness is inherently limited by the sparsity of factual triples and by the neglect of higher-order structural dependencies in the knowledge graph. To overcome this limitation, we introduce Structure-aware Calibration (SaCa), a versatile framework designed to calibrate KGEs through the integration of global structural patterns. SaCa comprises two new components: (i) Structural Proximity Measurement, which captures multi-order structural signals from both the entity and the entity-relation perspectives; and (ii) KG-Induced Soft-weighted Contrastive Learning (KISCL), which assigns soft weights to hard-to-distinguish positive and negative pairs, enabling the model to better reflect nuanced structural dependencies. Extensive experiments on seven benchmarks demonstrate that SaCa consistently boosts the performance of ten KGE models on link prediction and entity classification with minimal overhead.
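The abstract does not spell out the form of the KISCL objective; the sketch below is one plausible reading, assuming the structural proximity scores from component (i) serve as soft weights in an InfoNCE-style loss. The names (`kiscl_loss`, `proximity`) and the exact weighting scheme are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def kiscl_loss(anchor, candidates, proximity, temperature=0.1):
    """Hypothetical soft-weighted contrastive loss in the spirit of KISCL.

    anchor:     (d,) embedding of the anchor entity.
    candidates: (n, d) embeddings of candidate entities, mixing positives
                and negatives.
    proximity:  (n,) non-negative structural proximity scores; instead of
                hard 0/1 labels, they act as soft weights so that
                hard-to-distinguish pairs are neither fully attracted
                nor fully repelled.
    """
    # Temperature-scaled cosine similarities between anchor and candidates.
    sims = F.cosine_similarity(anchor.unsqueeze(0), candidates, dim=-1) / temperature
    log_probs = F.log_softmax(sims, dim=0)
    # Soft-weighted cross-entropy: each candidate contributes in proportion
    # to its structural proximity to the anchor.
    weights = proximity / proximity.sum()
    return -(weights * log_probs).sum()

# Toy usage with random embeddings and proximity scores.
loss = kiscl_loss(torch.randn(64), torch.randn(10, 64), torch.rand(10))
```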
2024
Improving Knowledge Graph Completion with Structure-Aware Supervised Contrastive Learning
Jiashi Lin | Lifang Wang | Xinyu Lu | Zhongtian Hu | Wei Zhang | Wenxuan Lu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Knowledge Graphs (KGs) often suffer from incomplete knowledge, which restricts their utility. Recently, Contrastive Learning (CL) has been introduced to Knowledge Graph Completion (KGC), significantly improving the discriminative capability of KGC models and setting new performance benchmarks. However, existing contrastive methods focus primarily on individual triples, overlooking the broader structural connectivity and topology of KGs. This narrow focus limits a comprehensive understanding of the graph’s structural knowledge. To address this gap, we propose StructKGC, a novel contrastive learning framework designed to flexibly accommodate the diverse topologies inherent in KGs. We introduce four contrastive tasks specifically tailored to KG data: Vertex-level CL, Neighbor-level CL, Path-level CL, and Relation-composition-level CL. These tasks are trained synergistically during the fine-tuning of pre-trained language models (PLMs), allowing a more nuanced capture of subgraph semantics. To validate the effectiveness of our method, we perform a comprehensive set of experiments on several real-world datasets. The results demonstrate that our approach achieves state-of-the-art performance under both standard supervised and low-resource settings. Furthermore, we observe that the various structure-aware tasks mutually reinforce one another, yielding consistent performance gains.
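The abstract likewise leaves the training objective unspecified; as a minimal sketch, the four structure-aware tasks could each be cast as in-batch InfoNCE over (query, key) embedding pairs produced by the PLM encoder and summed during fine-tuning. `info_nce`, `structkgc_step`, and `task_weights` are hypothetical names for illustration, not the paper's code.

```python
import torch
import torch.nn.functional as F

def info_nce(queries, keys, temperature=0.05):
    """In-batch InfoNCE: row i of `queries` is positive with row i of `keys`."""
    q = F.normalize(queries, dim=-1)
    k = F.normalize(keys, dim=-1)
    logits = q @ k.t() / temperature                      # (B, B) similarities
    labels = torch.arange(q.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)

def structkgc_step(vertex, neighbor, path, relcomp, task_weights=(1.0, 1.0, 1.0, 1.0)):
    """Hypothetical joint objective: a weighted sum of the four CL tasks,
    each given as a (query, key) pair of PLM embeddings."""
    pairs = (vertex, neighbor, path, relcomp)
    return sum(w * info_nce(q, k) for w, (q, k) in zip(task_weights, pairs))

# Toy usage: batches of 8 pairs with 128-dim embeddings per task.
pairs = [(torch.randn(8, 128), torch.randn(8, 128)) for _ in range(4)]
loss = structkgc_step(*pairs)
```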
Co-authors (number of joint papers)
- Zhongtian Hu (2)
- Wei Zhang (2)
- Changhong Jiang (1)
- Xinyu Lu (1)
- Wenxuan Lu (1)