Zehao Wang


2023

Few-shot Event Detection: An Empirical Study and a Unified View
Yubo Ma | Zehao Wang | Yixin Cao | Aixin Sun
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Few-shot event detection (ED) has been widely studied, but the resulting work differs noticeably in motivation, task formulation, and experimental setting, which hinders a clear understanding of existing models and of directions for future progress. This paper presents a thorough empirical study, a unified view of ED models, and a stronger unified baseline. For fair evaluation, we compare 12 representative methods on three datasets, grouping them into prompt-based and prototype-based models for detailed analysis. Experiments consistently demonstrate that prompt-based methods, including ChatGPT, still trail prototype-based methods significantly in overall performance. To investigate why prototype-based methods perform better, we break their designs down along several dimensions and build a unified framework over them. Under this unified view, each prototype-based method can be seen as a combination of modules drawn from these design elements. We further combine all the advantageous modules into a simple yet effective baseline, which outperforms existing methods by a large margin (e.g., 2.7% F1 gain under the low-resource setting).
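To make the prototype-based family concrete, here is a minimal sketch of the idea those methods share: build one prototype per event type by averaging support-set token embeddings, then label query tokens by their nearest prototype. This is a generic, illustrative rendition (all names are hypothetical), not the paper's unified baseline.

```python
import torch
import torch.nn.functional as F

def build_prototypes(support_embs: torch.Tensor,
                     support_labels: torch.Tensor,
                     num_classes: int) -> torch.Tensor:
    """Average the support embeddings of each event type into one prototype per class."""
    prototypes = torch.zeros(num_classes, support_embs.size(-1))
    for c in range(num_classes):
        mask = support_labels == c
        if mask.any():  # types absent from the support set keep an all-zero prototype
            prototypes[c] = support_embs[mask].mean(dim=0)
    return prototypes

def classify(query_embs: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Label each query token with the event type of its most similar prototype."""
    sims = F.cosine_similarity(query_embs.unsqueeze(1), prototypes.unsqueeze(0), dim=-1)
    return sims.argmax(dim=-1)
```

Roughly speaking, the design elements the paper varies correspond to the choices fixed arbitrarily here: how token embeddings are produced, how prototypes are aggregated, and which similarity function scores queries.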

2022

Parallel Interactive Attention Network for Joint Entity and Relation Extraction Based on Chinese Electronic Medical Records (基于平行交互注意力网络的中文电子病历实体及关系联合抽取)
Lishuang Li (李丽双) | Zehao Wang (王泽昊) | Xueyang Qin (秦雪洋) | Guanghui Yuan (袁光辉)
Proceedings of the 21st Chinese National Conference on Computational Linguistics

Building medical knowledge graphs from electronic medical records is of great significance to the development of medical technology, and entity and relation extraction is a key technique for knowledge graph construction. To address the insufficient feature interaction in current joint entity and relation extraction, this paper proposes a Parallel Interactive Attention Network (PIAN) to fully exploit the correlation between entities and relations, achieving state-of-the-art results on several standard medical and general-domain datasets. Since annotated Chinese medical entity and relation datasets are scarce, we construct an entity and relation extraction dataset (CEMRIE) from Chinese electronic medical records, formulate annotation guidelines together with medical experts, and report benchmark results of the proposed model on it.
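For intuition only, the sketch below shows one plausible form of parallel interaction between an entity stream and a relation stream, each attending to the other via cross-attention. It is an assumption-laden illustration of the general idea, not the PIAN architecture as published.

```python
import torch
import torch.nn as nn

class ParallelInteraction(nn.Module):
    """Hypothetical two-stream block: entity and relation features attend to
    each other in parallel (an illustrative guess, not the paper's code)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.rel_to_ent = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ent_to_rel = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, ent: torch.Tensor, rel: torch.Tensor):
        # Entity queries attend over relation keys/values, and vice versa.
        ent_out, _ = self.rel_to_ent(ent, rel, rel)
        rel_out, _ = self.ent_to_rel(rel, ent, ent)
        return ent_out, rel_out

# Example: batch of 2 sentences, 10 tokens, 256-dim features in both streams.
ent, rel = torch.randn(2, 10, 256), torch.randn(2, 10, 256)
ent_out, rel_out = ParallelInteraction(256)(ent, rel)
```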

Prompt for Extraction? PAIE: Prompting Argument Interaction for Event Argument Extraction
Yubo Ma | Zehao Wang | Yixin Cao | Mukai Li | Meiqi Chen | Kun Wang | Jing Shao
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

In this paper, we propose PAIE, an effective yet efficient model for both sentence-level and document-level Event Argument Extraction (EAE) that also generalizes well when training data is scarce. On the one hand, PAIE applies prompt tuning to an extractive objective to take full advantage of pre-trained language models (PLMs): it introduces two prompt-based span selectors that pick start/end tokens in the input text for each role. On the other hand, it captures argument interactions via multi-role prompts and jointly optimizes span assignments through a bipartite matching loss. With a flexible prompt design, PAIE can also extract multiple arguments for the same role, avoiding conventional heuristic threshold tuning. We conduct extensive experiments on three benchmarks covering both sentence- and document-level EAE. The results show promising improvements from PAIE (3.5% and 2.3% average F1 gains across the three benchmarks for PAIE-base and PAIE-large, respectively). Further analysis demonstrates its efficiency, its generalization to few-shot settings, and the effectiveness of different extractive prompt tuning strategies. Our code is available at https://github.com/mayubo2333/PAIE.
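The bipartite matching step can be grounded with SciPy's Hungarian-algorithm solver: score every (role slot, gold span) pair, then pick the one-to-one assignment maximizing total log-likelihood. The snippet below is a generic sketch of that matching idea under assumed inputs, not code from the PAIE repository.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_spans(slot_logprobs: np.ndarray) -> list[tuple[int, int]]:
    """Match role slots to gold spans one-to-one, maximizing total log-likelihood.

    slot_logprobs[i, j]: log-probability that prompt slot i selects gold span j.
    """
    rows, cols = linear_sum_assignment(-slot_logprobs)  # solver minimizes cost
    return list(zip(rows.tolist(), cols.tolist()))

# Two role slots scored against three candidate gold spans.
logp = np.log(np.array([[0.7, 0.2, 0.1],
                        [0.1, 0.6, 0.3]]))
print(assign_spans(logp))  # [(0, 0), (1, 1)]
```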

MMEKG: Multi-modal Event Knowledge Graph towards Universal Representation across Modalities
Yubo Ma | Zehao Wang | Mukai Li | Yixin Cao | Meiqi Chen | Xinze Li | Wenqi Sun | Kunquan Deng | Kun Wang | Aixin Sun | Jing Shao
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations

Events are fundamental building blocks of real-world happenings. In this paper, we present MMEKG, a large-scale, multi-modal event knowledge graph. MMEKG unifies knowledge from different modalities via events, which complement and disambiguate one another. Specifically, MMEKG incorporates (i) over 990 thousand concept events with 644 relation types, covering most types of happenings, and (ii) over 863 million instance events connected through 934 million relations, providing rich contextual information in texts and/or images. To collect billion-scale instance events and the relations among them, we develop an efficient yet effective pipeline for textual/visual knowledge extraction. We also develop an induction strategy that creates million-scale concept events and a schema organizing all events and relations in MMEKG. Finally, we provide a pipeline that enables our system to seamlessly parse texts/images into event graphs and to retrieve multi-modal knowledge at both the concept and instance levels.
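As a rough illustration of the two-level organization described above, the dataclasses below sketch how concept events, instance events, and their relations might be stored; every field name here is an assumption for illustration, not MMEKG's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    """Illustrative node: a concept event abstracts many grounded instance events."""
    event_id: str
    level: str                       # "concept" or "instance" (assumed field)
    event_type: str                  # an event type from the schema
    text: Optional[str] = None       # textual context, for text-grounded instances
    image_uri: Optional[str] = None  # visual context, for image-grounded instances

@dataclass
class Relation:
    """Illustrative edge between two events (concept- or instance-level)."""
    head_id: str
    tail_id: str
    rel_type: str                    # one of the relation types in the schema
```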