Xianqi Chu


2024

APE: Active Learning-based Tooling for Finding Informative Few-shot Examples for LLM-based Entity Matching
Kun Qian | Yisi Sang | Farima Bayat | Anton Belyi | Xianqi Chu | Yash Govind | Samira Khorshidi | Rahul Khot | Katherine Luna | Azadeh Nikfarjam | Xiaoguang Qi | Fei Wu | Xianhan Zhang | Yunyao Li
Proceedings of the Fifth Workshop on Data Science with Human-in-the-Loop (DaSH 2024)

Prompt engineering is an iterative procedure that often requires extensive manual effort to formulate suitable instructions for effectively directing large language models (LLMs) in specific tasks. Incorporating few-shot examples is a vital and effective approach to provide LLMs with precise instructions, leading to improved LLM performance. Nonetheless, identifying the most informative demonstrations for LLMs is labor-intensive, frequently entailing sifting through an extensive search space. In this demonstration, we showcase a human-in-the-loop tool called APE (Active Prompt Engineering) designed for refining prompts through active learning. Drawing inspiration from active learning, APE iteratively selects the most ambiguous examples for human feedback, which will be transformed into few-shot examples within the prompt.
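The abstract describes an active learning loop: score unlabeled candidates for ambiguity under the current prompt, ask a human to label the most ambiguous one, and fold the labeled pair back into the prompt as a few-shot demonstration. The sketch below illustrates that loop in Python under stated assumptions; the names `score_ambiguity`, `ask_human`, and `build_prompt` are hypothetical stand-ins, not APE's actual API, and the entity-matching prompt text is illustrative only.

```python
# Minimal sketch of an ambiguity-driven selection loop, assuming hypothetical
# callables for scoring, human labeling, and prompt construction.
from typing import Callable, List, Tuple


def build_prompt(demonstrations: List[Tuple[str, str]]) -> str:
    """Format labeled examples as few-shot demonstrations in the prompt."""
    shots = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in demonstrations)
    return "Decide whether the two records refer to the same entity.\n" + shots


def select_few_shot_examples(
    pool: List[str],                               # unlabeled candidate examples
    score_ambiguity: Callable[[str, str], float],  # higher = LLM more uncertain given current prompt
    ask_human: Callable[[str], str],               # human-in-the-loop labeling step
    rounds: int = 3,
) -> List[Tuple[str, str]]:
    """Iteratively pick the most ambiguous examples, have a human label them,
    and add them to the prompt as few-shot demonstrations."""
    demonstrations: List[Tuple[str, str]] = []
    remaining = list(pool)
    for _ in range(rounds):
        if not remaining:
            break
        prompt = build_prompt(demonstrations)
        # Pick the example the LLM is least certain about under the current prompt.
        most_ambiguous = max(remaining, key=lambda ex: score_ambiguity(prompt, ex))
        remaining.remove(most_ambiguous)
        label = ask_human(most_ambiguous)  # human feedback on the ambiguous case
        demonstrations.append((most_ambiguous, label))
    return demonstrations
```

In this reading, the human labels only the handful of cases the model finds hardest, rather than sifting through the full candidate pool.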