Huiwei Zhou

Also published as: HuiWei Zhou


2024

Temporal Knowledge Graph (TKG) reasoning has received growing interest recently, especially forecasting future facts from historical KG sequences. Existing studies typically use a recurrent neural network to learn evolutional representations of entities for temporal reasoning. However, these methods struggle to accurately capture complex temporal evolutional patterns such as sequential and repetitive patterns. To tackle this challenge, we propose a novel Sequential and Repetitive Pattern Learning (SRPL) method, which comprehensively captures both kinds of patterns. Specifically, a Dependency-aware Sequential Pattern Learning (DSPL) component expresses the temporal dependencies of each historical timestamp as embeddings to accurately capture sequential patterns across temporally adjacent facts, and a Time-interval guided Repetitive Pattern Learning (TRPL) component models the irregular time intervals between historical repetitive facts to capture repetitive patterns. Extensive experiments on four representative benchmarks demonstrate that our method outperforms state-of-the-art methods on all metrics by a clear margin, especially on the GDELT dataset, where the MRR improvement reaches 18.84%.
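To make the time-interval idea concrete, the sketch below scores candidate objects for a query by weighting historical repetitions of the same (subject, relation) pair with an embedding of the irregular time gap. This is a minimal PyTorch sketch of the general TRPL idea only; all class names, parameters, and dimensions here are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class TimeIntervalRepetitionScorer(nn.Module):
    """Sketch: score candidates for (subject, relation, ?, t_q) using
    historical repetitions, where each past fact is shifted by an
    embedding of its (irregular) time gap t_q - t_hist."""

    def __init__(self, num_entities, dim, max_gap=365):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, dim)
        self.gap_emb = nn.Embedding(max_gap + 1, dim)  # discretized time gaps
        self.score = nn.Linear(2 * dim, 1)
        self.max_gap = max_gap

    def forward(self, candidate_objects, hist_objects, hist_gaps):
        # candidate_objects: (num_cand,) entity ids to score
        # hist_objects: (num_hist,) objects seen with this (s, r) before
        # hist_gaps: (num_hist,) gaps t_q - t_hist, in timestamps
        gaps = hist_gaps.clamp(0, self.max_gap)
        h = self.entity_emb(hist_objects) + self.gap_emb(gaps)  # (num_hist, dim)
        hist_summary = h.mean(dim=0, keepdim=True)              # (1, dim)
        cand = self.entity_emb(candidate_objects)               # (num_cand, dim)
        pair = torch.cat([cand, hist_summary.expand_as(cand)], dim=-1)
        return self.score(pair).squeeze(-1)                     # (num_cand,)
```

Because the gap is embedded rather than binned uniformly, recent repetitions can contribute differently from distant ones, which is the point of modeling irregular intervals.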

2020

Document-level Relation Extraction (RE) is particularly challenging due to complex semantic interactions among multiple entities in a document. Among existing approaches, Graph Convolutional Networks (GCNs) are among the most effective for document-level RE. However, traditional GCNs simply take word nodes and an adjacency matrix to represent a graph, which makes it difficult to establish direct connections between distant entity pairs. In this paper, we propose Global Context-enhanced Graph Convolutional Networks (GCGCN), a novel model that represents entities as nodes and the context of entity pairs as edges between nodes to capture rich global context information about entities in a document. Two hierarchical blocks, Context-aware Attention Guided Graph Convolution (CAGGC) for partially connected graphs and Multi-head Attention Guided Graph Convolution (MAGGC) for fully connected graphs, take progressively more global context into account. Meanwhile, we leverage a large-scale distantly supervised dataset to pre-train a GCGCN model with curriculum learning, which is then fine-tuned on the human-annotated dataset to further improve document-level RE performance. Experimental results on DocRED show that our model effectively captures rich global context information in the document, leading to a state-of-the-art result. Our code is available at https://github.com/Huiweizhou/GCGCN.
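The core mechanism shared by CAGGC and MAGGC is attention-guided graph convolution: attention over node representations induces soft adjacency matrices that then drive an ordinary GCN update. The sketch below illustrates that general mechanism in PyTorch; layer names and sizes are assumptions for illustration, not the released GCGCN code (see the repository above for the actual model).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGuidedGraphConv(nn.Module):
    """Sketch: multi-head attention over node features produces one soft
    adjacency matrix per head; the averaged graph drives a GCN update."""

    def __init__(self, dim, num_heads=4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.gcn = nn.Linear(dim, dim)

    def forward(self, nodes):
        # nodes: (num_nodes, dim) entity-node representations
        n, d = nodes.shape
        hd = d // self.num_heads
        q = self.q_proj(nodes).view(n, self.num_heads, hd).transpose(0, 1)
        k = self.k_proj(nodes).view(n, self.num_heads, hd).transpose(0, 1)
        # (heads, n, n): a learned, fully connected soft adjacency per head
        adj = F.softmax(q @ k.transpose(-2, -1) / hd ** 0.5, dim=-1)
        # average the per-head graphs and propagate as in a standard GCN
        agg = adj.mean(dim=0) @ nodes
        return F.relu(self.gcn(agg))
```

Because the adjacency is learned rather than fixed, distant entity pairs can be connected directly, which is what the fixed word-level adjacency of a traditional GCN struggles to do.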

2019

In this paper, we propose a novel model called Adversarial Multi-Task Network (AMTN) for jointly modeling the Recognizing Question Entailment (RQE) and medical Question Answering (QA) tasks. AMTN uses a pre-trained BioBERT model and an Interactive Transformer to learn shared semantic representations across the two tasks through a parameter-sharing mechanism. Meanwhile, an adversarial training strategy is introduced to separate the private features of each task from the shared representations. Experiments on the BioNLP 2019 RQE and QA Shared Task datasets show that our model benefits from the shared representations provided by multi-task learning and adversarial training, and obtains significant improvements over the single-task models.
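The standard way to implement the adversarial separation described above is a gradient reversal layer: a task discriminator tries to identify which task an example came from, while reversed gradients push the shared encoder toward task-invariant features. The sketch below shows that pattern in PyTorch with a stand-in linear encoder; dimensions and names are illustrative assumptions, whereas AMTN itself uses BioBERT plus an Interactive Transformer as the shared encoder.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the
    backward pass, so the shared encoder is trained to fool the
    task discriminator."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class SharedPrivateModel(nn.Module):
    def __init__(self, dim=768, num_tasks=2, num_classes=2):
        super().__init__()
        self.shared = nn.Linear(dim, dim)               # stand-in shared encoder
        self.task_heads = nn.ModuleList(
            [nn.Linear(dim, num_classes) for _ in range(num_tasks)])
        self.discriminator = nn.Linear(dim, num_tasks)  # guesses the source task

    def forward(self, x, task_id, lam=1.0):
        shared = torch.relu(self.shared(x))
        task_logits = self.task_heads[task_id](shared)
        # adversarial branch: reversed gradients make `shared` task-agnostic
        disc_logits = self.discriminator(GradReverse.apply(shared, lam))
        return task_logits, disc_logits
```

Training sums the task loss and the discriminator's cross-entropy loss; the reversal ensures the two objectives pull the shared features in opposite directions, leaving task-specific signals to the private heads.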
In the medical domain, given a medical question, it is difficult to manually select the most relevant information from a large number of search results. The BioNLP 2019 Question Answering (QA) task encourages the use of text-mining technology to automatically judge whether a search result answers a medical question. The main challenge of the QA task is mining the semantic relation between question and answer. We propose a BioBERT Transformer model to tackle this challenge, which applies Transformers to extract semantic relations between words in questions and answers. Furthermore, BioBERT is used to encode medical domain-specific contextualized word representations. Our method achieves an accuracy of 76.24% and a Spearman correlation of 17.12% on the BioNLP 2019 QA task.
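A minimal way to reproduce this setup is to feed the question-answer pair to a BioBERT checkpoint with a sequence-classification head, so the Transformer's self-attention relates words across the two segments. The sketch below uses Hugging Face transformers with the public dmis-lab/biobert-base-cased-v1.1 checkpoint as an assumed stand-in for the paper's model; the classification head is freshly initialized and would need fine-tuning on the task data before its scores mean anything.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Public BioBERT checkpoint on the Hugging Face hub, assumed here for
# illustration; not taken from the paper's released code.
MODEL = "dmis-lab/biobert-base-cased-v1.1"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

question = "What are the side effects of ibuprofen?"
answer = "Common side effects include stomach pain, heartburn, and nausea."

# Encode the pair as [CLS] question [SEP] answer [SEP]; self-attention
# across the two segments lets the model relate question and answer words.
inputs = tokenizer(question, answer, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
prob_is_answer = logits.softmax(dim=-1)[0, 1].item()
print(f"P(answer is relevant) = {prob_is_answer:.3f}")
```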
