2025
Towards a More Generalized Approach in Open Relation Extraction
Qing Wang | Yuepei Li | Qiao Qiao | Kang Zhou | Qi Li
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Open Relation Extraction (OpenRE) seeks to identify and extract novel relational facts between named entities from unlabeled data without pre-defined relation schemas. Traditional OpenRE methods typically assume that the unlabeled data consists solely of novel relations or is pre-divided into known and novel instances. However, in real-world scenarios, novel relations are arbitrarily distributed. In this paper, we propose a generalized OpenRE setting that considers unlabeled data as a mixture of both known and novel instances. To address this setting, we introduce MixORE, a two-phase framework that integrates relation classification and clustering to jointly learn known and novel relations. Experiments on three benchmark datasets demonstrate that MixORE consistently outperforms competitive baselines in known relation classification and novel relation clustering. Our findings contribute to the advancement of generalized OpenRE research and real-world applications.
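The abstract does not spell out how the two phases interact, so the following is only a minimal sketch of the generalized setting it describes: a classifier trained on known relations routes confident unlabeled instances to known classes, and the remainder are clustered as novel relations. The `known_threshold`, the logistic-regression classifier, and the K-Means clusterer are illustrative assumptions, not MixORE's actual components.

```python
# Hypothetical sketch of the generalized OpenRE setting: split a mixed
# unlabeled pool into likely-known and likely-novel instances, then
# cluster the novel ones. Inputs are NumPy feature arrays.
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

def split_and_cluster(X_labeled, y_labeled, X_unlabeled,
                      known_threshold=0.8, n_novel_clusters=5):
    # Phase 1: train a classifier on the labeled known relations.
    clf = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)

    # Score unlabeled instances; low-confidence ones are treated as novel.
    confidence = clf.predict_proba(X_unlabeled).max(axis=1)
    known_mask = confidence >= known_threshold

    # Phase 2: assign confident instances to known relations and
    # cluster the rest to discover novel relations.
    known_preds = clf.predict(X_unlabeled[known_mask])
    novel_labels = KMeans(n_clusters=n_novel_clusters,
                          n_init=10).fit_predict(X_unlabeled[~known_mask])
    return known_mask, known_preds, novel_labels
```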
Re-Examine Distantly Supervised NER: A New Benchmark and a Simple Approach
Yuepei Li | Kang Zhou | Qiao Qiao | Qing Wang | Qi Li
Proceedings of the 31st International Conference on Computational Linguistics
Distantly-Supervised Named Entity Recognition (DS-NER) uses knowledge bases or dictionaries for annotation, reducing manual effort, but existing methods rely on large human-labeled validation sets. In this paper, we introduce a real-life DS-NER dataset, QTL, where the training data is annotated using domain dictionaries and the test data is annotated by domain experts. This dataset has a small validation set, reflecting real-life scenarios. Existing DS-NER approaches fail when applied to QTL, which motivates us to re-examine them. We find that many rely on large validation sets and some inappropriately use the test set for tuning. To address this issue, we propose a new approach, token-level Curriculum-based Positive-Unlabeled Learning (CuPUL), which uses curriculum learning to order training samples from easy to hard. This ordering stabilizes training, making it robust and effective with small validation sets. CuPUL also addresses the false negative issue using the Positive-Unlabeled learning paradigm, demonstrating improved performance in real-life applications.
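As a rough illustration of the curriculum idea (ordering training samples from easy to hard), the sketch below ranks tokens by the loss of a warm-up model; the ranking criterion and the scheduling hint are assumptions, not CuPUL's exact formulation.

```python
# Hypothetical sketch of curriculum ordering for DS-NER: rank training
# tokens from easy to hard by the cross-entropy loss of a warm-up model.
# Not the authors' CuPUL implementation.
import torch
import torch.nn.functional as F

def curriculum_order(logits, labels):
    """Return token indices sorted easy-to-hard.

    logits: (N, C) scores from a warm-up model; labels: (N,) distant labels.
    """
    with torch.no_grad():
        losses = F.cross_entropy(logits, labels, reduction="none")
    return torch.argsort(losses)  # low loss first = easy first

# Usage idea: let early epochs see only the easiest fraction, e.g.
# order = curriculum_order(warmup_logits, distant_labels)
# easy_third = order[: len(order) // 3]
```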
A Systematic Survey of Automatic Prompt Optimization Techniques
Kiran Ramnath | Kang Zhou | Sheng Guan | Soumya Smruti Mishra | Xuan Qi | Zhengyuan Shen | Shuai Wang | Sangmin Woo | Sullam Jeoung | Yawei Wang | Haozhu Wang | Han Ding | Yuzhe Lu | Zhichao Xu | Yun Zhou | Balasubramaniam Srinivasan | Qiaojing Yan | Yueyan Chen | Haibo Ding | Panpan Xu | Lin Lee Cheong
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Since the advent of large language models (LLMs), prompt engineering has been a crucial step for eliciting desired responses for various Natural Language Processing (NLP) tasks. However, prompt engineering remains an impediment for end users due to rapid advances in models, tasks, and associated best practices. To mitigate this, Automatic Prompt Optimization (APO) techniques have recently emerged that automatically refine prompts to improve the performance of LLMs on various tasks. In this paper, we present a comprehensive survey summarizing the current progress and remaining challenges in this field. We provide a formal definition of APO and a 5-part unifying framework, and then proceed to rigorously categorize all relevant works based on their salient features. We hope to spur further research guided by our framework.
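For orientation, the sketch below shows a bare-bones APO loop: propose prompt variants, score them on a small dev set, keep the best, and iterate. The `propose` and `score` callables are stand-ins supplied by the user, not any specific method from the survey's taxonomy.

```python
# Hypothetical sketch of a generic APO loop; proposer and scorer are
# user-supplied stand-ins, not a specific surveyed technique.
import random

def optimize_prompt(seed_prompt, propose, score, rounds=5, beam=4):
    """propose(prompt) -> list of candidate prompts;
    score(prompt) -> dev-set metric (higher is better)."""
    best, best_score = seed_prompt, score(seed_prompt)
    for _ in range(rounds):
        candidates = propose(best)
        for cand in random.sample(candidates, min(beam, len(candidates))):
            s = score(cand)
            if s > best_score:
                best, best_score = cand, s
    return best, best_score
```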
IPR: Intelligent Prompt Routing with User-Controlled Quality-Cost Trade-offs
Aosong Feng | Balasubramaniam Srinivasan | Yun Zhou | Zhichao Xu | Kang Zhou | Sheng Guan | Yueyan Chen | Xian Wu | Ninad Kulkarni | Yi Zhang | Zhengyuan Shen | Dmitriy Bespalov | Soumya Smruti Mishra | Yifei Teng | Darren Yow-Bang Wang | Haibo Ding | Lin Lee Cheong
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
Routing incoming queries to the most cost-effective LLM while maintaining response quality poses a fundamental challenge in optimizing performance-cost trade-offs for large-scale commercial systems. We present IPR, a quality-constrained Intelligent Prompt Routing framework that dynamically selects optimal models based on predicted response quality and user-specified tolerance levels. IPR introduces three key innovations: (1) a modular architecture with lightweight quality estimators trained on 1.5M prompts annotated with calibrated quality scores, enabling fine-grained quality prediction across model families; (2) a user-controlled routing mechanism with a tolerance parameter τ ∈ [0,1] that provides explicit control over quality-cost trade-offs; and (3) an extensible design using frozen encoders with model-specific adapters, reducing new model integration from days to hours. To rigorously train and evaluate IPR, we curate an industrial-scale IPR dataset, a comprehensive benchmark containing 1.5 million examples with response quality annotations across 11 LLM candidates. Deployed on a major cloud platform, IPR achieves a 43.9% cost reduction while maintaining quality parity with the strongest model in the Claude family and processes requests with sub-150ms latency.
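A minimal sketch of the tolerance-based routing rule the abstract describes: among candidate models, pick the cheapest whose predicted quality is within τ of the best predicted quality. The quality predictor and cost function are assumed inputs; this is not the deployed IPR system.

```python
# Hypothetical sketch of tolerance-controlled routing, assuming
# user-supplied quality predictors and per-model costs.
def route(prompt, models, predict_quality, cost, tau=0.1):
    """models: list of model ids; predict_quality(model, prompt) -> [0,1];
    cost(model) -> relative price; tau in [0,1] trades quality for cost."""
    qualities = {m: predict_quality(m, prompt) for m in models}
    best_q = max(qualities.values())
    # Any model within tau of the best predicted quality is eligible.
    eligible = [m for m in models if qualities[m] >= best_q - tau]
    return min(eligible, key=cost)  # cheapest model meeting the bar
```

With tau=0 the router always picks a quality-maximizing model; larger tau admits cheaper models at a bounded predicted-quality sacrifice.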
Investigating Context Faithfulness in Large Language Models: The Roles of Memory Strength and Evidence Style
Yuepei Li | Kang Zhou | Qiao Qiao | Bach Nguyen | Qing Wang | Qi Li
Findings of the Association for Computational Linguistics: ACL 2025
Retrieval-augmented generation (RAG) improves Large Language Models (LLMs) by incorporating external information into the response generation process. However, how context-faithful LLMs are and what factors influence LLMs' context faithfulness remain largely unexplored. In this study, we investigate the impact of memory strength and evidence presentation on LLMs' receptiveness to external evidence. We quantify the memory strength of LLMs by measuring the divergence in LLMs' responses to different paraphrases of the same question, which is not considered by previous works. We also generate evidence in various styles to examine LLMs' behavior. Our results show that for questions with high memory strength, LLMs are more likely to rely on internal memory. Furthermore, presenting paraphrased evidence significantly increases LLMs' receptiveness compared to simple repetition or adding details. These findings provide key insights for improving retrieval-augmented generation and context-aware LLMs. Our code is available at https://github.com/liyp0095/ContextFaithful.
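A toy sketch of the memory-strength idea: ask the same question in several paraphrased forms and measure how consistent the answers are. The exact-match agreement used below is an assumption for illustration; the paper's divergence measure may differ (see the linked repository).

```python
# Hypothetical sketch: estimate memory strength as answer consistency
# across paraphrases of one question. ask(q) -> model's answer string.
from itertools import combinations

def memory_strength(paraphrases, ask):
    """High agreement across paraphrases ~ strong internal memory."""
    answers = [ask(q).strip().lower() for q in paraphrases]
    pairs = list(combinations(answers, 2))
    agreement = sum(a == b for a, b in pairs) / len(pairs)
    return agreement  # in [0,1]; 1.0 = identical answer every time
```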
Black-Box Visual Prompt Engineering for Mitigating Object Hallucination in Large Vision Language Models
Sangmin Woo | Kang Zhou | Yun Zhou | Shuai Wang | Sheng Guan | Haibo Ding | Lin Lee Cheong
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)
Large Vision Language Models (LVLMs) often suffer from object hallucination, which undermines their reliability. Surprisingly, we find that simple object-based visual prompting, i.e., overlaying visual cues (e.g., bounding box, circle) on images, can significantly mitigate such hallucination; however, different visual prompts (VPs) vary in effectiveness. To address this, we propose Black-Box Visual Prompt Engineering (BBVPE), a framework to identify optimal VPs that enhance LVLM responses without needing access to model internals. Our approach employs a pool of candidate VPs and trains a router model to dynamically select the most effective VP for a given input image. This black-box approach is model-agnostic, making it applicable to both open-source and proprietary LVLMs. Evaluations on benchmarks such as POPE and CHAIR demonstrate that BBVPE effectively reduces object hallucination.
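A minimal sketch of the routing step described above: score each candidate VP for the input image with a trained router, overlay the winner, and query the black-box LVLM. The `router_score`, `overlay`, and `query_lvlm` callables are placeholders, not the paper's released code.

```python
# Hypothetical sketch of VP routing for a black-box LVLM; all callables
# are assumed stand-ins.
def answer_with_best_vp(image, question, vp_pool, router_score,
                        overlay, query_lvlm):
    """router_score(image, vp) -> predicted usefulness of vp for image."""
    best_vp = max(vp_pool, key=lambda vp: router_score(image, vp))
    prompted_image = overlay(image, best_vp)  # e.g., draw a bounding box
    return query_lvlm(prompted_image, question)
```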
2024
GenDecider: Integrating "None of the Candidates" Judgments in Zero-Shot Entity Linking Re-ranking
Kang Zhou | Yuepei Li | Qing Wang | Qiao Qiao | Qi Li
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)
We introduce GenDecider, a novel re-ranking approach for Zero-Shot Entity Linking (ZSEL), built on the Llama model. It innovatively detects scenarios where the correct entity is not among the retrieved candidates, a common oversight in existing re-ranking methods. By autoregressively generating outputs based on the context of the entity mention and the candidate entities, GenDecider significantly enhances disambiguation, improving the accuracy and reliability of ZSEL systems, as demonstrated on the benchmark ZESHEL dataset. Our code is available at https://github.com/kangISU/GenDecider.
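A minimal sketch of re-ranking with an explicit "None of the candidates" option, as the abstract describes. The prompt template below is an assumption for illustration, not GenDecider's actual template (see the linked repository).

```python
# Hypothetical sketch: let the generator choose among candidates or an
# explicit none-of-the-above option. generate(prompt) -> model text.
NONE_OPTION = "None of the candidates"

def rerank(mention_context, candidates, generate):
    options = candidates + [NONE_OPTION]
    listing = "\n".join(f"- {c}" for c in options)
    prompt = (f"Mention in context: {mention_context}\n"
              f"Candidate entities:\n{listing}\n"
              f"Answer with the single best option:")
    choice = generate(prompt).strip()
    return None if choice == NONE_OPTION else choice  # None = abstain
```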
2023
Improving Unsupervised Relation Extraction by Augmenting Diverse Sentence Pairs
Qing Wang | Kang Zhou | Qiao Qiao | Yuepei Li | Qi Li
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Unsupervised relation extraction (URE) aims to extract relations between named entities from raw text without requiring manual annotations or pre-existing knowledge bases. In recent studies of URE, researchers put a notable emphasis on contrastive learning strategies for acquiring relation representations. However, these studies often overlook two important aspects: the inclusion of diverse positive pairs for contrastive learning and the exploration of appropriate loss functions. In this paper, we propose AugURE with both within-sentence pairs augmentation and augmentation through cross-sentence pairs extraction to increase the diversity of positive pairs and strengthen the discriminative power of contrastive learning. We also identify the limitation of noise-contrastive estimation (NCE) loss for relation representation learning and propose to apply margin loss for sentence pairs. Experiments on NYT-FB and TACRED datasets demonstrate that the proposed relation representation learning combined with a simple K-Means clustering achieves state-of-the-art performance.
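To make the "margin loss for sentence pairs" concrete, here is a standard margin-based pair loss (in the style of classic contrastive loss): positive pairs are pulled together, negative pairs pushed beyond the margin. The margin value and Euclidean distance are assumptions, not the exact AugURE loss.

```python
# Hypothetical sketch of a margin loss over sentence-pair relation
# embeddings; margin and distance choice are illustrative assumptions.
import torch
import torch.nn.functional as F

def pair_margin_loss(z1, z2, is_positive, margin=1.0):
    """z1, z2: (N, d) relation embeddings; is_positive: (N,) bool tensor."""
    dist = F.pairwise_distance(z1, z2)
    pos_loss = dist.pow(2)                    # pull positive pairs together
    neg_loss = F.relu(margin - dist).pow(2)   # push negatives past margin
    return torch.where(is_positive, pos_loss, neg_loss).mean()
```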
2022
Distantly Supervised Named Entity Recognition via Confidence-Based Multi-Class Positive and Unlabeled Learning
Kang Zhou | Yuepei Li | Qi Li
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
In this paper, we study the named entity recognition (NER) problem under distant supervision. Due to the incompleteness of the external dictionaries and/or knowledge bases, such distantly annotated training data usually suffer from a high false negative rate. To this end, we formulate the Distantly Supervised NER (DS-NER) problem via Multi-class Positive and Unlabeled (MPU) learning and propose a theoretically and practically novel CONFidence-based MPU (Conf-MPU) approach. To handle the incomplete annotations, Conf-MPU consists of two steps. First, a confidence score is estimated for each token of being an entity token. Then, the proposed Conf-MPU risk estimation is applied to train a multi-class classifier for the NER task. Thorough experiments on two benchmark datasets labeled by various external knowledge sources demonstrate the superiority of the proposed Conf-MPU over existing DS-NER methods. Our code is available on GitHub.
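A toy sketch of the two-step idea only: step 1 yields a per-token confidence of being an entity token, and step 2 uses it when training the multi-class classifier, here by down-weighting 'O'-labeled tokens the confidence model suspects are missed entities. This is NOT the Conf-MPU risk estimator from the paper, just an illustration of how the confidence scores can correct dictionary false negatives.

```python
# Hypothetical confidence-weighted loss for DS-NER; not the Conf-MPU
# risk estimator, only an illustration of the two-step flow.
import torch
import torch.nn.functional as F

def weighted_ner_loss(logits, distant_labels, entity_confidence,
                      non_entity_id=0):
    """logits: (N, C); distant_labels: (N,) with non_entity_id for tokens
    the dictionary left unlabeled; entity_confidence: (N,) from step 1."""
    per_token = F.cross_entropy(logits, distant_labels, reduction="none")
    # Down-weight 'O'-labeled tokens that step 1 believes are entities:
    # these are the likely false negatives of the dictionary annotation.
    weights = torch.where(distant_labels == non_entity_id,
                          1.0 - entity_confidence,
                          torch.ones_like(entity_confidence))
    return (weights * per_token).mean()
```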