Xiaohui Hu
Also published as: XiaoHui Hu
2022
Supporting Medical Relation Extraction via Causality-Pruned Semantic Dependency Forest
Yifan Jin | Jiangmeng Li | Zheng Lian | Chengbo Jiao | Xiaohui Hu
Proceedings of the 29th International Conference on Computational Linguistics
The Medical Relation Extraction (MRE) task aims to extract relations between entities in medical texts. Traditional relation extraction methods achieve impressive success by exploiting syntactic information, e.g., the dependency tree. However, the quality of the 1-best dependency tree produced for medical texts by an out-of-domain parser is relatively limited, so the performance of medical relation extraction methods may degenerate. To this end, we propose a method that jointly models semantic and syntactic information from medical texts based on causal explanation theory. We generate dependency forests consisting of the semantic-embedded 1-best dependency trees. A task-specific causal explainer is then adopted to prune the dependency forests, which are further fed into a designed graph convolutional network to learn the corresponding representations for the downstream task. Empirically, comparisons on benchmark medical datasets demonstrate the effectiveness of our model.
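As a rough illustration of the pipeline sketched in the abstract, the snippet below shows how a pruned dependency forest, represented as a soft-weighted adjacency matrix, could be consumed by one graph convolutional layer to produce token representations for relation classification. The module name, dimensions, and degree normalization are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed names/shapes): one GCN layer over a pruned forest.
import torch
import torch.nn as nn


class ForestGCNLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, tokens: torch.Tensor, forest: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, dim) contextual token embeddings
        # forest: (batch, seq_len, seq_len) edge weights kept by the pruner
        #         (a 1-best tree would be 0/1; a forest keeps soft weights)
        agg = torch.bmm(forest, tokens)            # aggregate neighbour features
        deg = forest.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        return torch.relu(self.linear(agg / deg))  # degree-normalised update


# Toy usage: 2 sentences, 5 tokens, 16-dim embeddings, random forest weights.
tokens = torch.randn(2, 5, 16)
forest = torch.rand(2, 5, 5)
out = ForestGCNLayer(16)(tokens, forest)
print(out.shape)  # torch.Size([2, 5, 16])
```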
2020
Infusing Sequential Information into Conditional Masked Translation Model with Self-Review Mechanism
Pan Xie | Zhi Cui | Xiuying Chen | XiaoHui Hu | Jianwei Cui | Bin Wang
Proceedings of the 28th International Conference on Computational Linguistics
Non-autoregressive models generate target words in parallel, which achieves faster decoding but at the cost of translation accuracy. A promising approach to remedy flawed translations from non-autoregressive models is to train a conditional masked translation model (CMTM) and refine the generated results over several iterations. Unfortunately, such an approach hardly considers the sequential dependency among target words, which inevitably results in translation degradation. Hence, instead of solely training a Transformer-based CMTM, we propose a Self-Review Mechanism to infuse sequential information into it. Concretely, we insert a left-to-right mask into the same decoder of the CMTM and induce it to autoregressively review whether each word generated by the CMTM should be replaced or kept. The experimental results (WMT14 En↔De and WMT16 En↔Ro) demonstrate that our model requires dramatically less training computation than a typical CMTM and outperforms several state-of-the-art non-autoregressive models by over 1 BLEU. Through knowledge distillation, our model even surpasses a typical left-to-right Transformer model while significantly speeding up decoding.
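To make the review step concrete, the sketch below shows one way a left-to-right (causal) attention mask could be applied to a shared decoder layer so that a small head scores each drafted token as keep or replace. The stand-in decoder layer and the review head are assumptions for illustration, not the paper's code.

```python
# Minimal sketch (assumed components): causal-masked review pass over a CMTM draft.
import torch
import torch.nn as nn

dim = 16
# Stand-in for the shared CMTM decoder, reused for the autoregressive review pass.
decoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
review_head = nn.Linear(dim, 1)  # one keep/replace logit per position


def causal_mask(seq_len: int) -> torch.Tensor:
    # True entries mark positions the decoder may NOT attend to (future tokens),
    # i.e. the left-to-right mask described in the abstract.
    return torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)


draft = torch.randn(2, 7, dim)                          # embeddings of a CMTM draft
hidden = decoder_layer(draft, src_mask=causal_mask(7))  # autoregressive review pass
keep_logits = review_head(hidden).squeeze(-1)           # >0: keep token, <=0: replace
print(keep_logits.shape)  # torch.Size([2, 7])
```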