Yubo Feng


2024

Biomedical Event Causal Relation Extraction by Reasoning Optimal Entity Relation Path
Lishuang Li | Liteng Mi | Beibei Zhang | Yi Xiang | Yubo Feng | Xueyang Qin | Jingyao Tang
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)

Biomedical Event Causal Relation Extraction (BECRE) is an important task in biomedical information extraction. Existing methods usually use pre-trained language models to learn semantic representations and then predict the event causal relation. However, these methods struggle to capture sufficient cues in biomedical texts for predicting causal relations. In this paper, we propose a Path Reasoning-based Relation-aware Network (PRRN) to explore deeper cues for causal relations using reinforcement learning. Specifically, our model reasons over the relation paths between the entity arguments of two events, namely entity relation paths, which connect the two biomedical events through multi-hop interactions between entities and provide richer cues for predicting event causal relations. In PRRN, we design a path reasoning module based on reinforcement learning and propose a novel reward function to encourage the model to focus on the length and contextual relevance of entity relation paths. The experimental results on two datasets suggest that PRRN brings considerable improvements over the state-of-the-art models.
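
The abstract does not spell out the reward function; as a rough, hypothetical sketch of the idea it describes (scoring an entity relation path by its length and its contextual relevance to the event pair), one could write something like the following. The weighting scheme, the cosine-similarity relevance measure, and the function name are illustrative assumptions, not the paper's formulation.

```python
# Hypothetical reward sketch in the spirit of PRRN: favor entity relation
# paths that stay short and stay contextually relevant to the event pair.
# All weights and the similarity measure below are assumptions.
import torch
import torch.nn.functional as F

def path_reward(path_states, event_repr, length_weight=0.3, relevance_weight=0.7):
    """path_states: (L, d) hidden states of the hops along the entity relation path.
    event_repr:  (d,) joint representation of the two event mentions."""
    # Contextual relevance: mean cosine similarity between each hop and the event pair.
    relevance = F.cosine_similarity(path_states, event_repr.unsqueeze(0), dim=-1).mean()
    # Length term: shorter multi-hop paths receive a larger bonus.
    length_bonus = 1.0 / path_states.size(0)
    return relevance_weight * relevance + length_weight * length_bonus
```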

Triple-view Event Hierarchy Model for Biomedical Event Representation
Jiayi Huang | Lishuang Li | Xueyang Qin | Yi Xiang | Jiaqi Li | Yubo Feng
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)

Biomedical event representation can be applied to various language tasks. A biomedical event often involves multiple biomedical entities and trigger words, and the event structure is complex. However, existing research on event representation mainly focuses on the general domain. If models from the general domain are directly transferred to biomedical event representation, the results may not be satisfactory. We argue that biomedical events can be divided into three hierarchies, each containing unique feature information. Therefore, we propose the Triple-view Event Hierarchy Model (TEHM) to enhance the quality of biomedical event representation. TEHM extracts feature information from three different views and integrates them. Specifically, due to the complexity of biomedical events, we propose the Trigger-aware Aggregator module to handle complex units within biomedical events. Additionally, we annotate two similarity task datasets in the biomedical domain using annotation standards from the general domain. Extensive experiments demonstrate that TEHM achieves state-of-the-art performance on biomedical similarity tasks and biomedical event causal relation extraction.
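
As an illustration of what a trigger-aware aggregation step could look like, here is a minimal, assumed sketch: the token representations of a complex event unit are pooled with attention weights conditioned on the trigger representation. The bilinear scoring and the module interface are guesses for illustration, not the TEHM implementation.

```python
# Hypothetical trigger-aware aggregation: attention-pool the tokens of one
# event unit, with scores conditioned on the trigger representation.
import torch
import torch.nn as nn

class TriggerAwareAggregator(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.score = nn.Bilinear(hidden_size, hidden_size, 1)

    def forward(self, unit_tokens, trigger_repr):
        # unit_tokens: (n, d) tokens of one event unit; trigger_repr: (d,)
        trigger = trigger_repr.unsqueeze(0).expand_as(unit_tokens)
        weights = torch.softmax(self.score(unit_tokens, trigger).squeeze(-1), dim=0)
        # Weighted sum of the unit's tokens -> one (d,) vector for the unit.
        return (weights.unsqueeze(-1) * unit_tokens).sum(dim=0)
```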

Temporal Cognitive Tree: A Hierarchical Modeling Approach for Event Temporal Relation Extraction
Wanting Ning | Lishuang Li | Xueyang Qin | Yubo Feng | Jingyao Tang
Findings of the Association for Computational Linguistics: EMNLP 2024

Understanding and analyzing event temporal relations is a crucial task in Natural Language Processing (NLP). This task, known as Event Temporal Relation Extraction (ETRE), aims to identify and extract temporal connections between events in text. Recent studies focus on locating the relative position of event pairs on the timeline by designing logical expressions or auxiliary tasks to predict their temporal occurrence. Despite these advances, this modeling approach neglects the multidimensional information in temporal relations and the hierarchical process of reasoning. In this study, we propose a novel hierarchical modeling approach for this task by introducing a Temporal Cognitive Tree (TCT) that mimics human logical reasoning. Additionally, we design an integrated model incorporating prompt optimization and deductive reasoning to exploit multidimensional supervised information. Extensive experiments on the TB-Dense and MATRES datasets demonstrate that our approach outperforms existing methods.
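
To make the hierarchical idea concrete, the snippet below is a hypothetical coarse-to-fine decision over TB-Dense-style labels: pick a coarse branch first, then a fine-grained label within that branch, instead of choosing among all fine labels in one flat step. The tree layout and label grouping are illustrative assumptions, not the paper's Temporal Cognitive Tree.

```python
# Hypothetical coarse-to-fine temporal label decision (not the paper's TCT).
COARSE = {
    "ordered": ["BEFORE", "AFTER"],
    "overlapping": ["INCLUDES", "IS_INCLUDED", "SIMULTANEOUS"],
    "unknown": ["VAGUE"],
}

def hierarchical_decision(coarse_scores, fine_scores):
    """coarse_scores: dict branch -> score; fine_scores: dict label -> score."""
    branch = max(coarse_scores, key=coarse_scores.get)        # first-level decision
    candidates = COARSE[branch]                               # restrict to that branch
    return max(candidates, key=lambda lbl: fine_scores[lbl])  # second-level decision
```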

Event Representation Learning with Multi-Grained Contrastive Learning and Triple-Mixture of Experts
Tianqi Hu | Lishuang Li | Xueyang Qin | Yubo Feng
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Event representation learning plays a crucial role in numerous natural language processing (NLP) tasks, as it facilitates the extraction of semantic features associated with events. Current methods of learning event representation based on contrastive learning process positive examples with a single-grained random masked language model (MLM), but fall short in learning information inside events from multiple aspects. In this paper, we introduce multi-grained contrastive learning and a triple-mixture of experts (MCTM) for event representation learning. Our proposed method extends the random MLM by incorporating a specialized MLM designed to capture different grammatical structures within events, which allows the model to learn token-level knowledge from multiple perspectives. Furthermore, we have observed that mask tokens of different granularities affect the model differently; therefore, we incorporate a mixture of experts (MoE) to learn importance weights associated with different granularities. Our experiments demonstrate that MCTM outperforms other baselines in tasks such as hard similarity and transitive sentence similarity, highlighting the superiority of our method.
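
As a hedged sketch of the mixture-of-experts weighting described here, the snippet below combines one event representation per masking granularity using learned importance weights. The number of granularities and the gating form are assumptions for illustration, not the MCTM architecture itself.

```python
# Hypothetical granularity gate: weight granularity-specific event
# representations with learned importance scores (MoE-style gating).
import torch
import torch.nn as nn

class GranularityGate(nn.Module):
    def __init__(self, hidden_size, num_granularities=3):
        super().__init__()
        self.gate = nn.Linear(hidden_size, num_granularities)

    def forward(self, grain_reprs):
        # grain_reprs: (num_granularities, d), one representation per masking granularity
        pooled = grain_reprs.mean(dim=0)                       # summary used to score granularities
        weights = torch.softmax(self.gate(pooled), dim=-1)     # learned importance weights
        return (weights.unsqueeze(-1) * grain_reprs).sum(dim=0)  # weighted mixture, shape (d,)
```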