Jiayi Li
2024
Weight-Inherited Distillation for Task-Agnostic BERT Compression
Taiqiang Wu | Cheng Hou | Shanshan Lao | Jiayi Li | Ngai Wong | Zhe Zhao | Yujiu Yang
Findings of the Association for Computational Linguistics: NAACL 2024
Knowledge Distillation (KD) is a predominant approach for BERT compression. Previous KD-based methods focus on designing extra alignment losses for the student model to mimic the behavior of the teacher model. These methods transfer the knowledge in an indirect way. In this paper, we propose a novel Weight-Inherited Distillation (WID), which directly transfers knowledge from the teacher. WID does not require any additional alignment loss and trains a compact student by inheriting the weights, showing a new perspective of knowledge distillation. Specifically, we design the row compactors and column compactors as mappings and then compress the weights via structural re-parameterization. Experimental results on the GLUE and SQuAD benchmarks show that WID outperforms previous state-of-the-art KD-based baselines. Further analysis indicates that WID can also learn the attention patterns from the teacher model without any alignment loss on attention distributions. The code is available at https://github.com/wutaiqiang/WID-NAACL2024.
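The compactor idea can be illustrated with a minimal sketch: a learnable row compactor and column compactor wrap a teacher linear layer, and after training the three linear maps are merged into one compact student weight (structural re-parameterization), so no teacher-student alignment loss and no extra layers remain at inference. This is not the authors' released implementation; the layer names, sizes, and whether the teacher weight stays frozen during compactor training are assumptions.

```python
# Illustrative sketch of weight inheritance via row/column compactors
# (hypothetical sizes and training setup; see the authors' repo for the real code).
import torch
import torch.nn as nn

d_in, d_out = 768, 768        # teacher layer sizes (assumed)
d_in_s, d_out_s = 384, 384    # target student sizes (assumed)

teacher_fc = nn.Linear(d_in, d_out)   # a teacher layer, assumed frozen here
for p in teacher_fc.parameters():
    p.requires_grad = False

# Compactors act as learnable mappings around the teacher weight.
row_compactor = nn.Linear(d_out, d_out_s, bias=False)  # compresses output rows
col_compactor = nn.Linear(d_in_s, d_in, bias=False)    # compresses input columns

def student_forward(x):
    # Training-time stack: x -> column compactor -> teacher layer -> row compactor
    return row_compactor(teacher_fc(col_compactor(x)))

# Structural re-parameterization: merge the stack into a single compact layer.
with torch.no_grad():
    student_fc = nn.Linear(d_in_s, d_out_s)
    student_fc.weight.copy_(row_compactor.weight @ teacher_fc.weight @ col_compactor.weight)
    student_fc.bias.copy_(row_compactor.weight @ teacher_fc.bias)

# Sanity check: the merged layer reproduces the compactor stack exactly (up to float error).
x = torch.randn(2, d_in_s)
assert torch.allclose(student_fc(x), student_forward(x), atol=1e-4)
```

Because the merged layer is an exact re-parameterization of the trained stack, the student inherits the teacher's weights directly rather than imitating its outputs through an alignment loss.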
Prior Relational Schema Assists Effective Contrastive Learning for Inductive Knowledge Graph Completion
Ruilin Luo | Jiayi Li | Jianghangfan Zhang | Jing Xiao | Yujiu Yang
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Knowledge Graph Completion (KGC) is a task aimed at uncovering the inherent relationships among known knowledge triplets in a Knowledge Graph (KG) and subsequently predicting missing links. Presently, there is a rising interest in inductive knowledge graph completion, where missing links may pertain to previously unobserved entities. Previous inductive KGC methods mainly rely on descriptive information of entities to improve the representation of unseen entities, neglecting to provide effective prior knowledge for relation modeling. To tackle this challenge, we capture prior schema-level interactions related to relations by leveraging entity type information, thereby furnishing effective prior constraints when reasoning with newly introduced entities. Moreover, we employ standard in-batch negatives and introduce schema-guided negatives to improve the efficiency of contrastive representation learning. Experimental results demonstrate that our approach consistently achieves state-of-the-art performance on various established metrics across multiple benchmark datasets for link prediction. Notably, our method achieves a 20.5% relative increase in Hits@1 on the HumanWiki-Ind dataset.
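The contrastive setup described above can be sketched as an InfoNCE-style loss in which every other positive in the batch serves as a negative and additional schema-guided negatives are appended per query. This is only an illustrative sketch, not the paper's code; the function names, tensor shapes, temperature, and the way schema-violating negatives are sampled are all assumptions.

```python
# Illustrative sketch: in-batch negatives plus extra schema-guided negatives
# (hypothetical names and shapes; the actual sampling strategy is assumed).
import torch
import torch.nn.functional as F

def contrastive_loss(query_emb, pos_emb, schema_neg_emb, temperature=0.05):
    """
    query_emb:      (B, d)    embeddings of (head, relation) queries
    pos_emb:        (B, d)    embeddings of the gold tail entities
    schema_neg_emb: (B, K, d) extra negatives assumed to be sampled so that
                    they violate the relation's prior type schema
    """
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(pos_emb, dim=-1)
    n = F.normalize(schema_neg_emb, dim=-1)

    # In-batch negatives: similarity of each query to every positive in the batch;
    # the diagonal entries are the true (query, gold tail) pairs.
    in_batch_logits = q @ p.t() / temperature                        # (B, B)

    # Schema-guided negatives: per-query extra negatives appended as columns.
    schema_logits = torch.einsum("bd,bkd->bk", q, n) / temperature   # (B, K)

    logits = torch.cat([in_batch_logits, schema_logits], dim=1)      # (B, B+K)
    labels = torch.arange(q.size(0), device=q.device)                # positive = diagonal
    return F.cross_entropy(logits, labels)
```

Appending the schema-guided negatives as extra columns keeps the standard in-batch objective intact while making the model discriminate against type-plausible but schema-violating candidates.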
Co-authors
- Yujiu Yang 2
- Taiqiang Wu 1
- Cheng Hou 1
- Shanshan Lao 1
- Ngai Wong 1