2025
Synergistic Weak-Strong Collaboration by Aligning Preferences
Yizhu Jiao | Xuchao Zhang | Zhaoyang Wang | Yubo Ma | Zhun Deng | Rujia Wang | Chetan Bansal | Saravan Rajmohan | Jiawei Han | Huaxiu Yao
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Current Large Language Models (LLMs) excel at general reasoning yet struggle with specialized tasks that require proprietary or domain-specific knowledge. Fine-tuning large models for every niche application is often infeasible due to black-box constraints and high computational overhead. To address this, we propose a collaborative framework that pairs a specialized weak model with a general strong model. The weak model, tailored to specific domains, produces initial drafts and background information, while the strong model leverages its advanced reasoning to refine these drafts, extending LLMs’ capabilities to critical yet specialized tasks. To optimize this collaboration, we introduce a collaborative feedback mechanism to fine-tune the weak model: it quantifies the influence of the weak model’s contributions in the collaboration procedure and constructs preference pairs that guide preference tuning of the weak model. We validate our framework through experiments in three domains. We find that the collaboration significantly outperforms each model alone by leveraging their complementary strengths. Moreover, aligning the weak model with the collaborative preference further enhances overall performance.
Synergizing Unsupervised Episode Detection with LLMs for Large-Scale News Events
Priyanka Kargupta | Yunyi Zhang | Yizhu Jiao | Siru Ouyang | Jiawei Han
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
State-of-the-art automatic event detection struggles with interpretability and adaptability to evolving large-scale key events, unlike episodic structures, which excel in these areas. Often overlooked, episodes represent cohesive clusters of core entities performing actions at a specific time and location; a partially ordered sequence of episodes can represent a key event. This paper introduces a novel task, episode detection, which identifies episodes within a news corpus of key event articles. Detecting episodes poses unique challenges, as they lack explicit temporal or locational markers and cannot be merged using semantic similarity alone. While large language models (LLMs) can aid with these reasoning difficulties, they struggle with the long contexts typical of news corpora. To address these challenges, we introduce EpiMine, an unsupervised framework that identifies a key event’s candidate episodes by leveraging natural episodic partitions in articles, estimated through shifts in discriminative term combinations. These candidate episodes are more cohesive and representative of true episodes, synergizing with LLMs to better interpret and refine them into final episodes. We apply EpiMine to three diverse, real-world event datasets that we annotate at the episode level, where it achieves a 59.2% average gain across all metrics compared to baselines.
2024
Text2DB: Integration-Aware Information Extraction with Large Language Model Agents
Yizhu Jiao | Sha Li | Sizhe Zhou | Heng Ji | Jiawei Han
Findings of the Association for Computational Linguistics: ACL 2024
The task of information extraction (IE) is to extract structured knowledge from text. However, IE output is often not straightforward to use due to the mismatch between the IE ontology and the needs of downstream applications. We propose a new formulation of IE, Text2DB, that emphasizes integrating IE output with a target database (or knowledge base). Given a user instruction, a document set, and a database, our task requires the model to update the database with values from the document set so as to satisfy the user instruction. This requires understanding the user instruction to determine what to extract, and adapting to the given DB/KB schema to determine how to extract on the fly. To evaluate this new task, we introduce a benchmark featuring common demands such as data infilling, row population, and column addition. In addition, we propose an LLM agent framework, OPAL (Observe-Plan-Analyze LLM), which includes an Observer component that interacts with the database, a Planner component that generates a code-based plan with calls to IE models, and an Analyzer component that provides feedback on code quality before execution. Experiments show that OPAL can successfully adapt to diverse database schemas by generating different code plans and calling the required IE models. We also highlight difficult cases, such as large databases with complex dependencies and extraction hallucination, which we believe deserve further investigation.
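The Observe-Plan-Analyze loop described in the abstract can be sketched schematically; the component interfaces below are hypothetical stand-ins chosen for illustration, not OPAL's actual implementation.

```python
def observe(database: dict) -> dict:
    """Observer: inspect the target DB so the plan can match its schema.
    Here a 'database' is just a dict of table name -> list of row dicts."""
    return {table: sorted(rows[0].keys()) if rows else []
            for table, rows in database.items()}

def plan(instruction: str, schema: dict) -> str:
    """Planner: emit a (toy) code-based plan with calls to IE models."""
    lines = [f"# plan for: {instruction}"]
    for table, columns in schema.items():
        lines.append(f"rows = extract(docs, fields={columns})  # call IE model")
        lines.append(f"insert('{table}', rows)")
    return "\n".join(lines)

def analyze(code: str) -> bool:
    """Analyzer: cheap sanity check on the plan before execution."""
    return "extract(" in code and "insert(" in code
```

The point of the loop is that the extraction plan is generated *after* observing the schema, so the same framework adapts to different databases without retraining.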
2023
The Shifted and The Overlooked: A Task-oriented Investigation of User-GPT Interactions
Siru Ouyang | Shuohang Wang | Yang Liu | Ming Zhong | Yizhu Jiao | Dan Iter | Reid Pryzant | Chenguang Zhu | Heng Ji | Jiawei Han
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Recent progress in Large Language Models (LLMs) has produced models that exhibit remarkable performance across a variety of NLP tasks. However, it remains unclear whether the existing focus of NLP research accurately captures the genuine requirements of human users. This paper provides a comprehensive analysis of the divergence between academic NLP research and the needs of real-world NLP applications, based on a large-scale collection of real user queries to GPT. We compare these queries against existing NLP benchmark tasks and identify a significant gap between the tasks that users frequently request from LLMs and the tasks commonly studied in academic research. For example, we find that tasks such as “design” and “planning” are prevalent in user interactions but largely neglected by, or different from, traditional NLP benchmarks. We investigate these overlooked tasks, dissect the practical challenges, and provide insights toward a roadmap for making LLMs better aligned with user needs.
Instruct and Extract: Instruction Tuning for On-Demand Information Extraction
Yizhu Jiao | Ming Zhong | Sha Li | Ruining Zhao | Siru Ouyang | Heng Ji | Jiawei Han
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Large language models with instruction-following capabilities open the door to a wider group of users. However, when it comes to information extraction, a classic task in natural language processing, most task-specific systems cannot align well with the long-tail, ad hoc extraction use cases of non-expert users. To address this, we propose a novel paradigm, termed On-Demand Information Extraction, to fulfill the personalized demands of real-world users. Our task aims to follow the instructions to extract the desired content from the associated text and present it in a structured tabular format. The table headers can either be user-specified or inferred contextually by the model. To facilitate research in this emerging area, we present a benchmark named InstructIE, comprising both automatically generated training data and a human-annotated test set. Building on InstructIE, we further develop an On-Demand Information Extractor, ODIE. Comprehensive evaluations on our benchmark reveal that ODIE substantially outperforms existing open-source models of similar size.
Reaction Miner: An Integrated System for Chemical Reaction Extraction from Textual Data
Ming Zhong | Siru Ouyang | Yizhu Jiao | Priyanka Kargupta | Leo Luo | Yanzhen Shen | Bobby Zhou | Xianrui Zhong | Xuan Liu | Hongxiang Li | Jinfeng Xiao | Minhao Jiang | Vivian Hu | Xuan Wang | Heng Ji | Martin Burke | Huimin Zhao | Jiawei Han
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Chemical reactions, as a core entity in the realm of chemistry, hold crucial implications in diverse areas ranging from hands-on laboratory research to advanced computational drug design. Despite a burgeoning interest in employing NLP techniques to extract these reactions, aligning this task with the real-world requirements of chemistry practitioners remains an ongoing challenge. In this paper, we present Reaction Miner, a system specifically designed to interact with raw scientific literature, delivering precise and more informative chemical reactions. Going beyond mere extraction, Reaction Miner integrates a holistic workflow: it accepts PDF files as input, bypassing the need for pre-processing and bolstering user accessibility. Subsequently, a text segmentation module ensures that the refined text encapsulates complete chemical reactions, augmenting the accuracy of extraction. Moreover, Reaction Miner broadens the scope of existing pre-defined reaction roles, including vital attributes previously neglected, thereby offering a more comprehensive depiction of chemical reactions. Evaluations conducted by chemistry domain users highlight the efficacy of each module in our system, demonstrating Reaction Miner as a powerful tool in this field.
ReactIE: Enhancing Chemical Reaction Extraction with Weak Supervision
Ming Zhong | Siru Ouyang | Minhao Jiang | Vivian Hu | Yizhu Jiao | Xuan Wang | Jiawei Han
Findings of the Association for Computational Linguistics: ACL 2023
Structured chemical reaction information plays a vital role for chemists engaged in laboratory work and advanced endeavors such as computer-aided drug design. Despite the importance of extracting structured reactions from scientific literature, data annotation for this purpose is cost-prohibitive due to the significant labor required from domain experts. Consequently, the scarcity of sufficient training data poses an obstacle to the progress of related models in this domain. In this paper, we propose ReactIE, which combines two weakly supervised approaches for pre-training. Our method utilizes frequent patterns within the text as linguistic cues to identify specific characteristics of chemical reactions. Additionally, we adopt synthetic data from patent records as distant supervision to incorporate domain knowledge into the model. Experiments demonstrate that ReactIE achieves substantial improvements and outperforms all existing baselines.
2022
Towards a Unified Multi-Dimensional Evaluator for Text Generation
Ming Zhong | Yang Liu | Da Yin | Yuning Mao | Yizhu Jiao | Pengfei Liu | Chenguang Zhu | Heng Ji | Jiawei Han
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Multi-dimensional evaluation is the dominant paradigm for human evaluation in Natural Language Generation (NLG), i.e., evaluating the generated text from multiple explainable dimensions, such as coherence and fluency. However, automatic evaluation in NLG is still dominated by similarity-based metrics, and we lack a reliable framework for a more comprehensive evaluation of advanced models. In this paper, we propose a unified multi-dimensional evaluator UniEval for NLG. We re-frame NLG evaluation as a Boolean Question Answering (QA) task, and by guiding the model with different questions, we can use one evaluator to evaluate from multiple dimensions. Furthermore, thanks to the unified Boolean QA format, we are able to introduce an intermediate learning phase that enables UniEval to incorporate external knowledge from multiple related tasks and gain further improvement. Experiments on three typical NLG tasks show that UniEval correlates substantially better with human judgments than existing metrics. Specifically, compared to the top-performing unified evaluators, UniEval achieves a 23% higher correlation on text summarization, and over 43% on dialogue response generation. Also, UniEval demonstrates a strong zero-shot learning ability for unseen evaluation dimensions and tasks. Source code, data, and all pre-trained evaluators are available at https://github.com/maszhongming/UniEval.
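The Boolean-QA reframing in the abstract can be illustrated with a minimal sketch; the question templates and the scoring function here are hypothetical stand-ins, not UniEval's actual prompts or checkpoints (see the linked repository for those).

```python
# Each evaluation dimension becomes a yes/no question; the score for that
# dimension is the QA model's probability of answering "Yes".
QUESTION_TEMPLATES = {
    "coherence": ("Is this a coherent summary of the document? "
                  "Summary: {output} Document: {source}"),
    "fluency": "Is this a fluent utterance? Utterance: {output}",
}

def build_question(dimension: str, output: str, source: str = "") -> str:
    """Turn one evaluation dimension into a Boolean QA input."""
    return QUESTION_TEMPLATES[dimension].format(output=output, source=source)

def score(dimension: str, output: str, source: str, yes_probability) -> float:
    """Score a text on one dimension; `yes_probability` stands in for the
    evaluator model's P("Yes" | question)."""
    return yes_probability(build_question(dimension, output, source))
```

Because every dimension shares the same yes/no format, a single evaluator can cover all of them just by swapping the question, which is what enables the intermediate multi-task learning phase.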
Unsupervised Multi-Granularity Summarization
Ming Zhong | Yang Liu | Suyu Ge | Yuning Mao | Yizhu Jiao | Xingxing Zhang | Yichong Xu | Chenguang Zhu | Michael Zeng | Jiawei Han
Findings of the Association for Computational Linguistics: EMNLP 2022
Text summarization is a user-preference-based task, i.e., for a single document, different users often have different priorities for the summary. As a key aspect of customization in summarization, granularity is used to measure the semantic coverage between the summary and the source document. However, developing systems that can generate summaries with customizable semantic coverage is still an under-explored topic. In this paper, we propose the first unsupervised multi-granularity summarization framework, GranuSum. We take events as the basic semantic units of the source documents and propose to rank these events by their salience. We also develop a model to summarize input documents with given events as anchors and hints. By inputting different numbers of events, GranuSum is capable of producing multi-granular summaries in an unsupervised manner. Meanwhile, we annotate a new benchmark, GranuDUC, that contains multiple summaries at different granularities for each document cluster. Experimental results confirm the substantial superiority of GranuSum on multi-granularity summarization over strong baselines. Furthermore, by exploiting the event information, GranuSum also exhibits state-of-the-art performance under the conventional unsupervised abstractive setting.
Open-Vocabulary Argument Role Prediction For Event Extraction
Yizhu Jiao | Sha Li | Yiqing Xie | Ming Zhong | Heng Ji | Jiawei Han
Findings of the Association for Computational Linguistics: EMNLP 2022
The argument role in event extraction refers to the relation between an event and an argument participating in it. Despite great progress in event extraction, existing studies still depend on roles pre-defined by domain experts, and they show obvious weaknesses when extended to emerging event types or new domains where such roles are unavailable. Therefore, more attention and effort need to be devoted to automatically customizing argument roles. In this paper, we define this essential but under-explored task: open-vocabulary argument role prediction, whose goal is to infer a set of argument roles for a given event type. We propose a novel unsupervised framework, RolePred, for this task. Specifically, we formulate role prediction as an in-filling task and construct prompts for a pre-trained language model to generate candidate roles. By extracting and analyzing the candidate arguments, the event-specific roles are further merged and selected. To standardize research on this task, we collect a new human-annotated event extraction dataset including 143 customized argument roles with rich semantics. On this dataset, RolePred outperforms existing methods by a large margin.
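The in-filling formulation and the merging step from the abstract can be sketched as follows; the prompt template and the merging heuristic are illustrative assumptions, not RolePred's actual prompts or selection algorithm.

```python
def infilling_prompt(event_type: str, argument: str) -> str:
    """Cast role prediction as a blank for a masked language model to fill:
    the model's completions become candidate role names."""
    return f"In the {event_type} event, {argument} plays the role of [MASK]."

def merge_roles(candidates):
    """Merge duplicate candidate roles (case-insensitive) while
    preserving first-seen order, yielding the event-specific role set."""
    seen, merged = set(), []
    for role in candidates:
        key = role.lower().strip()
        if key not in seen:
            seen.add(key)
            merged.append(role)
    return merged
```

Generating roles per candidate argument and then merging across arguments is what makes the role vocabulary open: nothing constrains the completions to a pre-defined ontology.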
RESIN-11: Schema-guided Event Prediction for 11 Newsworthy Scenarios
Xinya Du | Zixuan Zhang | Sha Li | Pengfei Yu | Hongwei Wang | Tuan Lai | Xudong Lin | Ziqi Wang | Iris Liu | Ben Zhou | Haoyang Wen | Manling Li | Darryl Hannan | Jie Lei | Hyounghun Kim | Rotem Dror | Haoyu Wang | Michael Regan | Qi Zeng | Qing Lyu | Charles Yu | Carl Edwards | Xiaomeng Jin | Yizhu Jiao | Ghazaleh Kazeminejad | Zhenhailong Wang | Chris Callison-Burch | Mohit Bansal | Carl Vondrick | Jiawei Han | Dan Roth | Shih-Fu Chang | Martha Palmer | Heng Ji
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations
We introduce RESIN-11, a new schema-guided event extraction and prediction framework that can be applied to a large variety of newsworthy scenarios. The framework consists of two parts: (1) an open-domain, end-to-end, multimedia, multilingual information extraction system with weak-supervision and zero-shot-learning-based techniques; (2) schema matching and schema-guided event prediction based on our curated schema library. We build a demo website based on our dockerized system and schema library, publicly available for installation (https://github.com/RESIN-KAIROS/RESIN-11). We also include a video demonstrating the system.