Yin Hua
Also published as: 华 尹
2025
Beyond Completion: A Foundation Model for General Knowledge Graph Reasoning
Yin Hua | Zhiqiang Liu | Mingyang Chen | Zheng Fang | Chi Man Wong | Lingxiao Li | Chi Man Vong | Huajun Chen | Wen Zhang
Findings of the Association for Computational Linguistics: ACL 2025
In natural language processing (NLP) and computer vision (CV), the successful application of foundation models across diverse tasks has demonstrated their remarkable potential. However, despite the rich structural and textual information embedded in knowledge graphs (KGs), existing research on foundation models for KGs has primarily focused on structural aspects, with most efforts restricted to in-KG tasks (e.g., knowledge graph completion, KGC). This limitation has hindered progress on more challenging out-of-KG tasks. In this paper, we introduce MERRY, a foundation model for general knowledge graph reasoning, and investigate its performance across two task categories: in-KG reasoning tasks (e.g., KGC) and out-of-KG tasks (e.g., KG question answering, KGQA). We utilize not only the structural information but also the textual information in KGs. Specifically, we propose a multi-perspective Conditional Message Passing (CMP) encoding architecture to bridge the gap between the textual and structural modalities, enabling their seamless integration. Additionally, we introduce a dynamic residual fusion module to selectively retain relevant textual information and a flexible edge scoring mechanism to adapt to diverse downstream tasks. Comprehensive evaluations on 28 datasets demonstrate that MERRY outperforms existing baselines in most scenarios, showcasing strong reasoning capabilities within KGs and excellent generalization to out-of-KG tasks such as KGQA.
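The abstract describes a dynamic residual fusion module that selectively retains textual information on top of the structural (CMP) representation, but gives no implementation details. The sketch below is a minimal, hypothetical gated residual fusion in PyTorch; the class name, dimensions, and gating scheme are assumptions for illustration, not MERRY's actual code.

```python
import torch
import torch.nn as nn

class GatedResidualFusion(nn.Module):
    """Hypothetical gated residual fusion of textual and structural embeddings.

    A learned gate decides, per dimension, how much of the textual signal to
    retain on top of the structural (message-passing) representation.
    Illustration only; not the authors' implementation.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.proj = nn.Linear(dim, dim)

    def forward(self, struct_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # Gate computed from both views; values near 1 keep more textual signal.
        g = self.gate(torch.cat([struct_emb, text_emb], dim=-1))
        # Residual combination: structural backbone plus gated textual contribution.
        return struct_emb + g * self.proj(text_emb)

# Toy usage with assumed dimensions.
fusion = GatedResidualFusion(dim=256)
struct_emb = torch.randn(32, 256)     # e.g., output of a CMP/GNN encoder
text_emb = torch.randn(32, 256)       # e.g., pooled text-encoder embeddings
fused = fusion(struct_emb, text_emb)  # shape (32, 256)
```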
2024
A Chinese Abstract Meaning Representation Parsing Framework with a Mixture of LoRA Experts (混合 LoRA 专家的中文抽象语义表示解析框架)
Wu Zihao (吴梓浩) | Yin Hua (尹华) | Gao Ziqian (高子千) | Zhang Jiajia (张佳佳) | Ji Yuelei (季跃蕾) | Tang Kuntian (唐堃添)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)
This paper describes the system we submitted to the Chinese Abstract Meaning Representation (CAMR) parsing evaluation task at the 23rd Chinese National Conference on Computational Linguistics. Abstract Meaning Representation (AMR) models a sentence as a directed acyclic graph, with semantic concepts as nodes and relation labels as edges, to represent the sentence's meaning. Inspired by work on AMR parsing that incorporates syntactic information, we propose a CAMR parsing framework with a mixture of LoRA (Low-Rank Adaptation) experts. The framework consists of a base CAMR parser fine-tuned from a large language model, four sentence-category experts, and one Classical Chinese LoRA expert model. The proposed framework achieves the best results on all three evaluation datasets.
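The system pairs a fine-tuned base parser with LoRA adapters selected per sentence. Below is a minimal sketch of the general mixture-of-LoRA-experts pattern in PyTorch, assuming a shared frozen linear layer and hypothetical category names (the abstract names only four sentence-category experts and one Classical Chinese expert); it illustrates the technique, not the submitted system.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Low-Rank Adaptation (LoRA) wrapper around a frozen linear layer.

    Standard LoRA formulation: y = W x + (alpha / r) * B(A(x)).
    Generic textbook sketch, not the competition system's code.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # keep the pretrained weight frozen
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # adapters start as a no-op
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_b(self.lora_a(x))

# A mixture of experts as a plain dictionary: one LoRA adapter per category.
# The category names here are placeholders, not those used in the paper.
base_layer = nn.Linear(1024, 1024)
experts = {name: LoRALinear(base_layer) for name in
           ["category_1", "category_2", "category_3", "category_4", "classical_chinese"]}

# Hypothetical routing: a sentence-category label (hard-coded here) selects the expert.
category = "classical_chinese"
hidden = torch.randn(1, 16, 1024)            # toy hidden states from the base LLM
adapted = experts[category](hidden)          # shape preserved: (1, 16, 1024)
```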