Hongye Tan


2024

FRVA: Fact-Retrieval and Verification Augmented Entailment Tree Generation for Explainable Question Answering
Yue Fan | Hu Zhang | Ru Li | YuJie Wang | Hongye Tan | Jiye Liang
Findings of the Association for Computational Linguistics: ACL 2024

A structured entailment tree can exhibit the reasoning chain from knowledge facts to a predicted answer, which is important for constructing an explainable question answering system. Existing work either generates the entire tree directly or generates the proof steps one at a time. Stepwise methods can exploit combinatoriality and generalize to longer proofs, but they suffer from large fact search spaces and error accumulation, which lead to invalid steps. In this paper, inspired by the Dual Process Theory in cognitive science, we propose FRVA, a Fact-Retrieval and Verification Augmented bidirectional entailment tree generation method that contains two systems. Specifically, System 1 makes intuitive judgments through a fact retrieval module and filters out irrelevant facts to reduce the search space. System 2 is a deductive-abductive bidirectional reasoning module, in which we construct cross-verification and multi-view contrastive learning to bring the generated proof steps closer to the target hypothesis. Together, these components improve the reliability of the stepwise proofs and mitigate error propagation. Experimental results on EntailmentBank show that FRVA outperforms previous models and achieves state-of-the-art performance in fact selection and structural correctness.
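
As a rough illustration of the two-system pipeline described above, the sketch below prunes the fact pool with a retriever and keeps only verified proof steps; the retriever, step_generator, and verifier objects are hypothetical placeholders under assumed interfaces, not the authors' released implementation.

def generate_entailment_tree(hypothesis, fact_pool, retriever,
                             step_generator, verifier, top_k=25, max_steps=10):
    # System 1: intuitive filtering -- keep only facts relevant to the hypothesis.
    candidates = retriever.top_k(hypothesis, fact_pool, k=top_k)

    proof_steps = []
    for _ in range(max_steps):
        # System 2: propose a deductive step (premises -> intermediate conclusion).
        premises, conclusion = step_generator.propose(hypothesis, candidates)

        # Cross-verify the step, e.g. by abducing a premise back from the
        # conclusion and checking consistency with the remaining premises.
        if not verifier.accept(premises, conclusion, hypothesis):
            continue  # reject invalid steps to limit error propagation

        proof_steps.append((premises, conclusion))
        candidates = [f for f in candidates if f not in premises] + [conclusion]

        if verifier.entails(conclusion, hypothesis):
            break  # the target hypothesis has been derived
    return proof_steps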

Hyperspherical Multi-Prototype with Optimal Transport for Event Argument Extraction
Guangjun Zhang | Hu Zhang | YuJie Wang | Ru Li | Hongye Tan | Jiye Liang
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Event Argument Extraction (EAE) aims to extract the arguments of specified events from a text. Previous research has mainly focused on addressing long-distance dependencies among arguments and modeling co-occurrence relationships between roles and events, while overlooking two potential inductive biases: (i) semantic differences among arguments of the same type and (ii) large-margin separation between arguments of different types. Inspired by prototype networks, we introduce a new model named HMPEAE, which takes the two inductive biases above as targets to locate prototypes and guide the model to learn argument representations based on these prototypes. Specifically, we set multiple prototypes to represent each role in order to capture intra-class differences. Simultaneously, we use a hypersphere as the output space for prototypes and define a large-margin separation between them, encouraging the model to learn significant differences between arguments of different types. We solve the “argument-prototype” assignment as an optimal transport problem to optimize the argument representations and minimize the absolute distance between arguments and prototypes, achieving compactness within sub-clusters. Experimental results on the RAMS and WikiEvents datasets show that HMPEAE achieves state-of-the-art performance.
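
The “argument-prototype” assignment can be made concrete as entropic optimal transport between argument embeddings and role prototypes on the unit hypersphere. The sketch below uses a generic Sinkhorn routine with cosine costs and assumed uniform marginals; it illustrates the technique rather than reproducing the paper's exact formulation.

import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def sinkhorn_assignment(arguments, prototypes, epsilon=0.05, n_iters=50):
    # Cosine cost on the hypersphere: both sides are unit-normalized.
    a = l2_normalize(arguments)         # (n_args, d) argument embeddings
    p = l2_normalize(prototypes)        # (n_protos, d) role prototypes
    cost = 1.0 - a @ p.T                # (n_args, n_protos)

    # Entropic OT with uniform marginals over arguments and prototypes (an assumption).
    K = np.exp(-cost / epsilon)
    u = np.ones(a.shape[0]) / a.shape[0]
    v = np.ones(p.shape[0]) / p.shape[0]
    r, c = u.copy(), v.copy()
    for _ in range(n_iters):
        r = u / (K @ c)
        c = v / (K.T @ r)
    return np.diag(r) @ K @ np.diag(c)  # soft assignment plan of arguments to prototypes

# Example: 8 argument embeddings assigned to 3 prototypes of one role (dim 16).
plan = sinkhorn_assignment(np.random.randn(8, 16), np.random.randn(3, 16))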

2023

Improving Sequential Model Editing with Fact Retrieval
Xiaoqi Han | Ru Li | Hongye Tan | Wang Yuanlong | Qinghua Chai | Jeff Pan
Findings of the Association for Computational Linguistics: EMNLP 2023

The task of sequential model editing is to fix erroneous knowledge in Pre-trained Language Models (PLMs) efficiently, precisely, and continuously. Although existing methods can handle a small number of modifications, they suffer a performance decline or require additional annotated data when the number of edits increases. In this paper, we propose a Retrieval-Augmented Sequential Model Editing framework (RASE) that leverages factual information to enhance editing generalization and to guide the identification of edits by retrieving related facts from a fact-patch memory we construct. Our main findings are: (i) state-of-the-art models can hardly correct massive mistakes stably and efficiently; (ii) even when scaled up to thousands of edits, RASE significantly enhances editing generalization and maintains consistent performance and efficiency; (iii) RASE can edit large-scale PLMs and improve the performance of different editors. Moreover, it can be integrated with ChatGPT to further improve performance. Our code and data are available at: https://github.com/sev777/RASE.
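
As a minimal sketch of the retrieval-guided editing loop described above, the snippet stores each edit as a (fact embedding, patch) pair in a fact-patch memory and routes a query through a patch only when a sufficiently similar fact is retrieved; the encoder, threshold, and patch interface are illustrative assumptions, not RASE's actual API.

import math

def cosine(a, b):
    # Cosine similarity between two plain Python vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb + 1e-12)

class FactPatchMemory:
    def __init__(self, encoder, threshold=0.8):
        self.encoder = encoder           # maps text to a vector (assumed interface)
        self.threshold = threshold
        self.keys, self.patches = [], []

    def add_edit(self, fact_text, patch):
        # Store one edit: the fact it corrects and the patch that fixes it.
        self.keys.append(self.encoder(fact_text))
        self.patches.append(patch)

    def retrieve(self, query_text):
        # Return the patch of the most similar stored fact, if similar enough.
        if not self.keys:
            return None
        q = self.encoder(query_text)
        sims = [cosine(q, k) for k in self.keys]
        best = max(range(len(sims)), key=sims.__getitem__)
        return self.patches[best] if sims[best] >= self.threshold else None

def answer(query, base_model, memory):
    patch = memory.retrieve(query)
    if patch is None:
        return base_model(query)             # query unrelated to any stored edit
    return patch.apply(base_model, query)    # route through the matched edit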

CCL23-Eval 任务9总结报告:汉语高考阅读理解对抗鲁棒评测 (Overview of CCL23-Eval Task 9: Adversarial Robustness Evaluation for Chinese Gaokao Reading Comprehension)
Yaxin Guo (郭亚鑫) | Guohang Yan (闫国航) | Hongye Tan (谭红叶) | Ru Li (李茹)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)

The Chinese Gaokao reading comprehension adversarial robustness evaluation task aims to improve the robustness of machine reading comprehension models in complex, realistic adversarial settings. The task designed four adversarial attack strategies (keyword perturbation, reasoning-logic perturbation, spatio-temporal attribute perturbation, and causal-relation perturbation) and constructed an adversarial robustness subset, GCRC advRobust. Given a passage and a question, systems must choose the correct answer from four options. The evaluation attracted wide attention from both industry and academia: 29 teams registered, but owing to the task's difficulty, only 8 teams submitted results. All technical information about the task, including system submissions, official results, and links to supporting resources and software, is available from the task website.

2021

GCRC: A New Challenging MRC Dataset from Gaokao Chinese for Explainable Evaluation
Hongye Tan | Xiaoyue Wang | Yu Ji | Ru Li | Xiaoli Li | Zhiwei Hu | Yunxiao Zhao | Xiaoqi Han
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Frame Semantic-Enhanced Sentence Modeling for Sentence-level Extractive Text Summarization
Yong Guan | Shaoru Guo | Ru Li | Xiaoli Li | Hongye Tan
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Sentence-level extractive text summarization aims to select important sentences from a given document. However, modeling the importance of sentences is very challenging. In this paper, we propose a novel Frame Semantic-Enhanced Sentence Modeling method for extractive summarization, which leverages frame semantics to model sentences at both the intra-sentence and inter-sentence levels, facilitating the text summarization task. In particular, intra-sentence semantics leverage Frames and Frame Elements to model the internal semantic structure within a sentence, while inter-sentence semantics leverage Frame-to-Frame relations to model relationships among sentences. Extensive experiments on two benchmark corpora, CNN/DM and NYT, demonstrate that our model significantly outperforms six state-of-the-art methods.

2020

Incorporating Syntax and Frame Semantics in Neural Network for Machine Reading Comprehension
Shaoru Guo | Yong Guan | Ru Li | Xiaoli Li | Hongye Tan
Proceedings of the 28th International Conference on Computational Linguistics

Machine reading comprehension (MRC) is one of the most critical yet challenging tasks in natural language understanding (NLU), where both the syntactic and semantic information of a text are essential components for text understanding. Surprisingly, jointly considering syntax and semantics in neural networks has not been formally reported in the literature. This paper makes the first attempt by proposing a novel Syntax and Frame Semantics model for Machine Reading Comprehension (SS-MRC), which takes full advantage of syntax and frame semantics to obtain richer text representations. Our extensive experimental results demonstrate that SS-MRC outperforms ten state-of-the-art methods on the machine reading comprehension task.

A Frame-based Sentence Representation for Machine Reading Comprehension
Shaoru Guo | Ru Li | Hongye Tan | Xiaoli Li | Yong Guan | Hongyan Zhao | Yueping Zhang
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Sentence representation (SR) is the most crucial and challenging task in Machine Reading Comprehension (MRC). MRC systems typically utilize only the information contained in a sentence itself, whereas human beings can also leverage their semantic knowledge. To bridge this gap, we propose a novel Frame-based Sentence Representation (FSR) method, which employs frame semantic knowledge to facilitate sentence modelling. Specifically, unlike existing methods that model only lexical units (LUs), our Frame Representation Models utilize both the LUs in a frame and Frame-to-Frame (F-to-F) relations to model frames and sentences with an attention schema. The proposed FSR method integrates multi-frame semantic information to obtain much better sentence representations. Our extensive experimental results show that it outperforms state-of-the-art methods on the machine reading comprehension task.

2014

Detection on Inconsistency of Verb Phrase in TreeBank
Chaoqun Duan | Dequan Zheng | Conghui Zhu | Sheng Li | Hongye Tan
Proceedings of the Third CIPS-SIGHAN Joint Conference on Chinese Language Processing

2008

A Chinese Word Segmentation System Based on Cascade Model
Jianfeng Zhang | Jiaheng Zheng | Hu Zhang | Hongye Tan
Proceedings of the Sixth SIGHAN Workshop on Chinese Language Processing