Lin Lee Cheong


2025

A Systematic Survey of Automatic Prompt Optimization Techniques
Kiran Ramnath | Kang Zhou | Sheng Guan | Soumya Smruti Mishra | Xuan Qi | Zhengyuan Shen | Shuai Wang | Sangmin Woo | Sullam Jeoung | Yawei Wang | Haozhu Wang | Han Ding | Yuzhe Lu | Zhichao Xu | Yun Zhou | Balasubramaniam Srinivasan | Qiaojing Yan | Yueyan Chen | Haibo Ding | Panpan Xu | Lin Lee Cheong
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Since the advent of large language models (LLMs), prompt engineering has been a crucial step for eliciting desired responses for various Natural Language Processing (NLP) tasks. However, prompt engineering remains an impediment for end users due to rapid advances in models, tasks, and associated best practices. To mitigate this, Automatic Prompt Optimization (APO) techniques have recently emerged that use automated methods to improve the performance of LLMs on diverse tasks. In this paper, we present a comprehensive survey summarizing the current progress and remaining challenges in the field. We provide a formal definition of APO and a 5-part unifying framework, and then rigorously categorize all relevant works according to their salient features within that framework. We hope to spur further research guided by our framework.
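For context, many APO methods follow a search loop over candidate prompts. The sketch below shows one generic variant (mutate a candidate, score it on a dev set, keep the best); it is an illustrative sketch of that common pattern, not the survey's framework or any specific method it covers, and mutate_fn and score_fn are hypothetical placeholders.

```python
import random
from typing import Callable, List, Tuple

def optimize_prompt(seed_prompt: str,
                    mutate_fn: Callable[[str], str],
                    score_fn: Callable[[str], float],
                    iterations: int = 20,
                    beam: int = 4) -> str:
    """Generic iterative prompt search: mutate, score, keep a small beam.
    (Illustrative only; not the survey's taxonomy.)"""
    candidates: List[Tuple[float, str]] = [(score_fn(seed_prompt), seed_prompt)]
    for _ in range(iterations):
        # Pick an existing candidate and rewrite it (e.g., via an LLM rewriter).
        _, parent = random.choice(candidates)
        child = mutate_fn(parent)
        candidates.append((score_fn(child), child))
        # Retain only the top-scoring prompts.
        candidates = sorted(candidates, key=lambda x: x[0], reverse=True)[:beam]
    return candidates[0][1]
```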

IPR: Intelligent Prompt Routing with User-Controlled Quality-Cost Trade-offs
Aosong Feng | Balasubramaniam Srinivasan | Yun Zhou | Zhichao Xu | Kang Zhou | Sheng Guan | Yueyan Chen | Xian Wu | Ninad Kulkarni | Yi Zhang | Zhengyuan Shen | Dmitriy Bespalov | Soumya Smruti Mishra | Yifei Teng | Darren Yow-Bang Wang | Haibo Ding | Lin Lee Cheong
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track

Routing incoming queries to the most cost-effective LLM while maintaining response quality poses a fundamental challenge in optimizing performance-cost trade-offs for large-scale commercial systems. We present IPR, a quality-constrained Intelligent Prompt Routing framework that dynamically selects optimal models based on predicted response quality and user-specified tolerance levels. IPR introduces three key innovations: (1) a modular architecture with lightweight quality estimators trained on 1.5M prompts annotated with calibrated quality scores, enabling fine-grained quality prediction across model families; (2) a user-controlled routing mechanism with tolerance parameter τ ∈ [0,1] that provides explicit control over quality-cost trade-offs; and (3) an extensible design using frozen encoders with model-specific adapters, reducing new model integration from days to hours. To rigorously train and evaluate IPR, we curate an industrial-level IPR dataset, a comprehensive benchmark containing 1.5 million examples with response quality annotations across 11 LLM candidates. Deployed on a major cloud platform, IPR achieves a 43.9% cost reduction while maintaining quality parity with the strongest model in the Claude family and processes requests with sub-150ms latency.
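As a rough illustration of the tolerance-controlled routing described above, the sketch below picks the cheapest candidate whose predicted quality falls within τ of the best predicted score. The ModelCandidate class, predict_quality callable, and cost field are hypothetical stand-ins, not the paper's estimators, adapters, or deployment API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ModelCandidate:
    name: str
    cost: float  # relative cost per request; lower is cheaper

def route(prompt: str,
          candidates: List[ModelCandidate],
          predict_quality: Callable[[str, str], float],
          tau: float) -> ModelCandidate:
    """Return the cheapest model whose predicted quality is within the
    user-specified tolerance tau of the best predicted quality."""
    assert 0.0 <= tau <= 1.0, "tolerance must lie in [0, 1]"
    scores: Dict[str, float] = {c.name: predict_quality(prompt, c.name)
                                for c in candidates}
    best = max(scores.values())
    # Candidates whose predicted quality stays within tau of the best.
    eligible = [c for c in candidates if scores[c.name] >= best - tau]
    # Among those, pick the cheapest.
    return min(eligible, key=lambda c: c.cost)
```

With τ = 0 this degenerates to always routing to the highest-predicted-quality model; larger τ trades quality headroom for cost savings.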

Black-Box Visual Prompt Engineering for Mitigating Object Hallucination in Large Vision Language Models
Sangmin Woo | Kang Zhou | Yun Zhou | Shuai Wang | Sheng Guan | Haibo Ding | Lin Lee Cheong
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)

Large Vision Language Models (LVLMs) often suffer from object hallucination, which undermines their reliability. Surprisingly, we find that simple object-based visual prompting—overlaying visual cues (e.g., bounding box, circle) on images—can significantly mitigate such hallucination; however, different visual prompts (VPs) vary in effectiveness. To address this, we propose Black-Box Visual Prompt Engineering (BBVPE), a framework to identify optimal VPs that enhance LVLM responses without needing access to model internals. Our approach employs a pool of candidate VPs and trains a router model to dynamically select the most effective VP for a given input image. This black-box approach is model-agnostic, making it applicable to both open-source and proprietary LVLMs. Evaluations on benchmarks such as POPE and CHAIR demonstrate that BBVPE effectively reduces object hallucination.
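A minimal sketch of how a router over a pool of visual prompts could be wired up in a black-box fashion, assuming placeholder callables for the VP overlays, the trained router, and the LVLM; none of these names or signatures come from the paper.

```python
from typing import Callable, List
from PIL import Image

def answer_with_routed_vp(image: Image.Image,
                          question: str,
                          vp_pool: List[Callable[[Image.Image], Image.Image]],
                          router: Callable[[Image.Image], int],
                          lvlm: Callable[[Image.Image, str], str]) -> str:
    """Overlay the router-selected visual prompt, then query the LVLM."""
    vp_index = router(image)              # router scores the image and picks one VP
    prompted = vp_pool[vp_index](image)   # e.g., draw a bounding box or circle overlay
    return lvlm(prompted, question)       # the LVLM is treated as a black box
```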