Zheng Tang


2022

Taxonomy Builder: a Data-driven and User-centric Tool for Streamlining Taxonomy Construction
Mihai Surdeanu | John Hungerford | Yee Seng Chan | Jessica MacBride | Benjamin Gyori | Andrew Zupon | Zheng Tang | Haoling Qiu | Bonan Min | Yan Zverev | Caitlin Hilverman | Max Thomas | Walter Andrews | Keith Alcock | Zeyu Zhang | Michael Reynolds | Steven Bethard | Rebecca Sharp | Egoitz Laparra
Proceedings of the Second Workshop on Bridging Human-Computer Interaction and Natural Language Processing

An existing domain taxonomy for normalizing content is often assumed when discussing approaches to information extraction, yet in real-world scenarios there often is none. When one does exist, it must be continually extended as information needs shift. This is a slow and tedious task, and one that does not scale well. Here we propose an interactive tool that allows a taxonomy to be built or extended rapidly, with a human in the loop to control precision. We apply insights from text summarization and information extraction to reduce the search space dramatically, then leverage modern pretrained language models to perform contextualized clustering of the remaining concepts, yielding candidate nodes for the user to review. We show that this allows a user to consider as many as 200 taxonomy concept candidates an hour and to quickly build or extend a taxonomy to better fit information needs.
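A minimal Python sketch of the contextualized-clustering step described above: candidate concept phrases are embedded with a pretrained sentence encoder and grouped so a reviewer can accept or reject one cluster at a time. The encoder name, candidate list, and distance threshold are illustrative assumptions, not the tool's actual configuration.

from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

# Hypothetical candidates surfaced by the search-space reduction step.
candidates = ["crop yield", "harvest output", "food prices",
              "market cost", "rainfall", "precipitation"]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice
vectors = encoder.encode(candidates)               # one embedding per phrase

# Group near-duplicate concepts; each cluster becomes a candidate node.
labels = AgglomerativeClustering(
    n_clusters=None, distance_threshold=1.0,
    metric="cosine", linkage="average",
).fit_predict(vectors)

for label in sorted(set(labels)):
    members = [c for c, l in zip(candidates, labels) if l == label]
    print(f"candidate node {label}: {members}")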

2021

How May I Help You? Using Neural Text Simplification to Improve Downstream NLP Tasks
Hoang Van | Zheng Tang | Mihai Surdeanu
Findings of the Association for Computational Linguistics: EMNLP 2021

The general goal of text simplification (TS) is to reduce text complexity for human consumption. In this paper, we investigate another potential use of neural TS: assisting machines performing natural language processing (NLP) tasks. We evaluate the use of neural TS in two ways: simplifying input texts at prediction time and augmenting data to provide machines with additional information during training. We demonstrate that the latter scenario provides positive effects on machine performance on two separate datasets. In particular, the latter use of TS improves the performance of LSTM (1.82–1.98%) and SpanBERT (0.7–1.3%) extractors on TACRED, a complex, large-scale, real-world relation extraction task. Further, the same setting yields improvements of up to 0.65% matched and 0.62% mismatched accuracy for a BERT text classifier on MNLI, a practical natural language inference dataset.
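A minimal sketch of the augmentation setting, assuming a generic sentence-level task: each training example is duplicated with a simplified version of its text, and both copies keep the gold label. The simplify function here is a placeholder for the neural TS model used in the paper.

def simplify(text: str) -> str:
    # Placeholder: a real implementation would call a trained neural
    # text-simplification model; lowercasing just keeps the sketch runnable.
    return text.lower()

def augment(dataset):
    """Return the original examples plus simplified variants, same labels."""
    augmented = []
    for text, label in dataset:
        augmented.append((text, label))
        augmented.append((simplify(text), label))  # extra training signal
    return augmented

train = [("The acquisition of XYZ Corp was finalized by ABC Inc.", "org:acquired")]
print(augment(train))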

Interpretability Rules: Jointly Bootstrapping a Neural Relation Extractor with an Explanation Decoder
Zheng Tang | Mihai Surdeanu
Proceedings of the First Workshop on Trustworthy Natural Language Processing

We introduce a method that transforms a rule-based relation extraction (RE) classifier into a neural one such that both interpretability and performance are achieved. Our approach jointly trains an RE classifier with a decoder that generates explanations for these extractions, using as sole supervision a set of rules that match these relations. Our evaluation on the TACRED dataset shows that our neural RE classifier outperforms the rule-based one we started from by 9 F1 points; our decoder generates explanations with a high BLEU score of over 90%; and the joint learning improves the performance of both the classifier and decoder.
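A toy PyTorch sketch of the joint architecture, assuming an LSTM encoder rather than the paper's exact extractor: a shared encoder feeds both a relation classifier and a decoder that emits rule tokens, and the two cross-entropy losses are summed. All dimensions, vocabulary sizes, and the batch below are illustrative.

import torch
import torch.nn as nn

class JointREModel(nn.Module):
    """Shared encoder with a relation-classification head and a
    rule-decoding head trained jointly."""
    def __init__(self, vocab=5000, rule_vocab=500, hidden=256, relations=42):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, relations)
        self.rule_embed = nn.Embedding(rule_vocab, hidden)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.rule_out = nn.Linear(hidden, rule_vocab)

    def forward(self, tokens, rule_prefix):
        _, (h, c) = self.encoder(self.embed(tokens))
        rel_logits = self.classifier(h[-1])                    # relation per sentence
        dec_out, _ = self.decoder(self.rule_embed(rule_prefix), (h, c))
        return rel_logits, self.rule_out(dec_out)              # next rule token

model = JointREModel()
ce = nn.CrossEntropyLoss()
tokens = torch.randint(0, 5000, (8, 30))   # toy batch of token ids
rules = torch.randint(0, 500, (8, 12))     # toy rule-token sequences
gold_rel = torch.randint(0, 42, (8,))
rel_logits, rule_logits = model(tokens, rules[:, :-1])
loss = ce(rel_logits, gold_rel) + \
       ce(rule_logits.reshape(-1, 500), rules[:, 1:].reshape(-1))
loss.backward()  # one joint update covers both objectives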

2020

Exploring Interpretability in Event Extraction: Multitask Learning of a Neural Event Classifier and an Explanation Decoder
Zheng Tang | Gus Hahn-Powell | Mihai Surdeanu
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop

We propose an interpretable approach for event extraction that mitigates the tension between generalization and interpretability by jointly training for the two goals. Our approach uses an encoder-decoder architecture, which jointly trains a classifier for event extraction, and a rule decoder that generates syntactico-semantic rules that explain the decisions of the event classifier. We evaluate the proposed approach on three biomedical events and show that the decoder generates interpretable rules that serve as accurate explanations for the event classifier’s decisions, and, importantly, that the joint training generally improves the performance of the event classifier. Lastly, we show that our approach can be used for semi-supervised learning, and that its performance improves when trained on automatically-labeled data generated by a rule-based system.
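A small sketch of the semi-supervised setting mentioned at the end: a rule-based matcher labels raw sentences, and the resulting silver data is added to the classifier's training set. The trigger patterns below are illustrative regexes, not the grammars of an actual biomedical rule system.

import re

SILVER_RULES = {
    "Phosphorylation": re.compile(r"\bphosphorylat\w*\b", re.I),
    "Ubiquitination": re.compile(r"\bubiquitinat\w*\b", re.I),
}

def rule_label(sentence):
    """Return the first event type whose trigger pattern fires, if any."""
    for event_type, pattern in SILVER_RULES.items():
        if pattern.search(sentence):
            return event_type
    return None

unlabeled = [
    "MEK phosphorylates ERK at two residues.",
    "The complex dissociates slowly.",
]
silver = [(s, y) for s in unlabeled if (y := rule_label(s)) is not None]
print(silver)  # only the first sentence receives a silver event label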

2019

Eidos, INDRA, & Delphi: From Free Text to Executable Causal Models
Rebecca Sharp | Adarsh Pyarelal | Benjamin Gyori | Keith Alcock | Egoitz Laparra | Marco A. Valenzuela-Escárcega | Ajay Nagesh | Vikas Yadav | John Bachman | Zheng Tang | Heather Lent | Fan Luo | Mithun Paul | Steven Bethard | Kobus Barnard | Clayton Morrison | Mihai Surdeanu
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)

Building causal models of complicated phenomena such as food insecurity is currently a slow and labor-intensive manual process. In this paper, we introduce an approach that builds executable probabilistic models from raw, free text. The proposed approach is implemented through three systems: Eidos, INDRA, and Delphi. Eidos is an open-domain machine reading system designed to extract causal relations from natural language. It is rule-based, allowing for rapid domain transfer, customizability, and interpretability. INDRA aggregates multiple sources of causal information and performs assembly to create a coherent knowledge base and assess its reliability. This assembled knowledge serves as the starting point for modeling. Delphi is a modeling framework that assembles quantified causal fragments and their contexts into executable probabilistic models that respect the semantics of the original text, and can be used to support decision making.
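A toy end-to-end illustration of the idea, assuming nothing about the real systems: causal fragments become signed edges in a graph, and the graph is "executed" by propagating a perturbation along those edges. Eidos's extractions and Delphi's probabilistic semantics are far richer; this only shows the shape of the pipeline.

# (cause, effect, polarity) fragments, as a machine reader might emit them.
fragments = [
    ("rainfall", "crop yield", +1),
    ("crop yield", "food availability", +1),
    ("conflict", "food availability", -1),
]

edges = {}
for cause, effect, sign in fragments:
    edges.setdefault(cause, []).append((effect, sign))

def propagate(node, delta, state=None):
    """Push a change at `node` through outgoing signed edges (assumes a DAG)."""
    state = state if state is not None else {}
    state[node] = state.get(node, 0.0) + delta
    for succ, sign in edges.get(node, []):
        propagate(succ, delta * sign, state)
    return state

print(propagate("rainfall", +1.0))
# {'rainfall': 1.0, 'crop yield': 1.0, 'food availability': 1.0}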