Yingtai Li
2025
A General Knowledge Injection Framework for ICD Coding
Xu Zhang | Kun Zhang | Wenxin Ma | Rongsheng Wang | Chenxu Wu | Yingtai Li | S Kevin Zhou
Findings of the Association for Computational Linguistics: ACL 2025
ICD coding, a popular and challenging task in the healthcare domain, aims to assign a wide range of medical codes to a medical text document. To alleviate the problems of long-tail code distribution and the lack of annotated code-specific evidence, many previous works have proposed incorporating code knowledge to improve coding performance. However, existing methods often focus on a single type of knowledge and design specialized modules that are complex and incompatible with each other, limiting their scalability and effectiveness. To address this issue, we propose GKI-ICD, a novel, general knowledge injection framework that integrates three key types of knowledge, namely ICD Description, ICD Synonym, and ICD Hierarchy, without specially designed additional modules. Comprehensively exploiting these knowledge types, which are both distinct and complementary, effectively enhances ICD coding performance. Extensive experiments on popular ICD coding benchmarks demonstrate the effectiveness of GKI-ICD, which achieves state-of-the-art performance on most evaluation metrics. Code is available at https://github.com/xuzhang0112/GKI-ICD.
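The abstract's central idea, injecting the three knowledge types into the code embeddings themselves rather than through dedicated modules, can be illustrated with a minimal sketch. All names below (KnowledgeInjectedCoder, inject_knowledge, the mean fusion of the three encodings) are illustrative assumptions, not the authors' released implementation; see the linked repository for the actual code.

```python
# Hedged sketch: fuse encodings of ICD Description, Synonym, and Hierarchy
# text into per-code embeddings that drive standard label-wise attention,
# so no extra knowledge-specific module is needed.
import torch
import torch.nn as nn

class KnowledgeInjectedCoder(nn.Module):
    def __init__(self, n_codes: int, hidden: int = 768):
        super().__init__()
        # One learnable embedding per ICD code, initialized from knowledge.
        self.code_emb = nn.Embedding(n_codes, hidden)
        self.out = nn.Linear(hidden, 1)

    def inject_knowledge(self, desc, syn, hier):
        # desc/syn/hier: (n_codes, hidden) encodings of the three knowledge
        # types (e.g., mean-pooled PLM outputs of description, synonym, and
        # parent-chain text). Averaging is one simple fusion; the paper's
        # exact scheme may differ.
        with torch.no_grad():
            self.code_emb.weight.copy_((desc + syn + hier) / 3.0)

    def forward(self, token_states):
        # token_states: (batch, seq_len, hidden) from a document encoder.
        # Label-wise attention: each code attends to its own evidence tokens.
        scores = torch.einsum("bsh,ch->bcs", token_states, self.code_emb.weight)
        attn = scores.softmax(dim=-1)
        code_repr = torch.einsum("bcs,bsh->bch", attn, token_states)
        return self.out(code_repr).squeeze(-1)  # (batch, n_codes) logits
```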
2020
AprilE: Attention with Pseudo Residual Connection for Knowledge Graph Embedding
Yuzhang Liu | Peng Wang | Yingtai Li | Yizhan Shao | Zhongkai Xu
Proceedings of the 28th International Conference on Computational Linguistics
Knowledge graph embedding maps entities and relations into a low-dimensional vector space. However, it remains challenging for many existing methods to model diverse relational patterns, especially symmetric and antisymmetric relations. To address this issue, we propose a novel model, AprilE, which employs triple-level self-attention and a pseudo residual connection to model relational patterns. The triple-level self-attention treats the head entity, relation, and tail entity as a sequence and captures the dependencies within a triple. At the same time, the pseudo residual connection retains primitive semantic features. Furthermore, to deal with symmetric and antisymmetric relations, two schemas of score function are designed via a position-adaptive mechanism. Experimental results on public datasets demonstrate that our model produces expressive knowledge embeddings and significantly outperforms most state-of-the-art works.
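The triple-level self-attention and pseudo residual connection described above can be sketched in a few lines. Everything below (AprilESketch, the TransE-style placeholder score) is an illustrative assumption rather than the paper's exact formulation; in particular, AprilE's position-adaptive score functions are not reproduced here.

```python
# Hedged sketch: treat the (head, relation, tail) embeddings as a
# length-3 sequence, apply self-attention over it, and add the original
# embeddings back ("pseudo residual") to retain primitive features.
import torch
import torch.nn as nn

class AprilESketch(nn.Module):
    def __init__(self, n_entities: int, n_relations: int, dim: int = 200):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, h_idx, r_idx, t_idx):
        # Stack head, relation, tail as a 3-token sequence per triple.
        triple = torch.stack(
            [self.ent(h_idx), self.rel(r_idx), self.ent(t_idx)], dim=1
        )  # (batch, 3, dim)
        attended, _ = self.attn(triple, triple, triple)
        # Pseudo residual connection: keep the primitive semantic features.
        h, r, t = (attended + triple).unbind(dim=1)
        # TransE-style placeholder score; AprilE's actual score functions
        # are position-adaptive to handle (anti)symmetric relations.
        return -(h + r - t).norm(p=1, dim=-1)
```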