Hua Li
2025
Task Facet Learning: A Structured Approach To Prompt Optimization
Gurusha Juneja | Gautam Jajoo | Hua Li | Jian Jiao | Nagarajan Natarajan | Amit Sharma
Findings of the Association for Computational Linguistics: ACL 2025
Given a task in the form of a basic description and its training examples, prompt optimization is the problem of synthesizing the given information into a text prompt for a large language model. Humans solve this problem by also considering the different facets that define a task (e.g., counter-examples, explanations, analogies) and including them in the prompt. However, it is unclear whether existing algorithmic approaches, based on iteratively editing a given prompt or automatically selecting a few in-context examples, can cover the multiple facets required to solve a complex task. In this work, we view prompt optimization as the problem of learning multiple facets of a task from a set of training examples. We exploit structure in the prompt optimization problem and break down a prompt into loosely coupled semantic sections. The proposed algorithm, UniPrompt, (1) clusters the input space and uses clustered batches so that each batch likely corresponds to a different facet of the task, and (2) utilizes a feedback mechanism to propose adding, editing, or deleting a section, which in turn is aggregated over a batch to capture generalizable facets. Empirical evaluation on multiple datasets and a real-world task shows that prompts generated using UniPrompt obtain higher accuracy than human-tuned prompts and those from state-of-the-art methods. In particular, our algorithm can generate long, complex prompts that existing methods are unable to generate.
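The abstract's two-step loop (clustered batches, then section-level edits) can be sketched as follows. This is a minimal, illustrative sketch, not the paper's implementation: `cluster_examples` and `propose_edit` are stand-ins for the semantic clustering and LLM-driven feedback components, and all function names here are assumptions.

```python
def cluster_examples(examples, num_clusters):
    """Group training examples into batches so that each batch likely
    corresponds to a different task facet. Here a naive round-robin
    bucketing stands in for the paper's semantic clustering."""
    clusters = [[] for _ in range(num_clusters)]
    for i, ex in enumerate(examples):
        clusters[i % num_clusters].append(ex)
    return clusters

def propose_edit(prompt_sections, batch):
    """Stand-in for the LLM feedback mechanism: propose adding, editing,
    or deleting one semantic section based on a batch of examples."""
    facet_name = f"facet_{len(prompt_sections)}"
    return ("add", facet_name, f"Handle cases like: {batch[0]}")

def apply_edit(prompt_sections, edit):
    """Apply a single (op, section_name, text) edit to the prompt."""
    op, name, text = edit
    sections = dict(prompt_sections)
    if op in ("add", "edit"):
        sections[name] = text
    elif op == "delete":
        sections.pop(name, None)
    return sections

def optimize_prompt(task_description, examples, num_clusters=2):
    """One pass of the loop: the prompt is a set of loosely coupled
    sections, updated once per clustered batch."""
    sections = {"task": task_description}
    for batch in cluster_examples(examples, num_clusters):
        if not batch:
            continue
        sections = apply_edit(sections, propose_edit(sections, batch))
    return "\n\n".join(f"## {name}\n{text}" for name, text in sections.items())
```

Because edits are proposed per batch and sections are independent, the prompt can grow to cover many facets, which is how the method produces the long, complex prompts the abstract describes.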
2022
Leveraging Seq2seq Language Generation for Multi-level Product Issue Identification
Yang Liu | Varnith Chordia | Hua Li | Siavash Fazeli Dehkordy | Yifei Sun | Vincent Gao | Na Zhang
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
In a leading e-commerce business, we receive hundreds of millions of customer feedback messages from different text communication channels such as product reviews. The feedback can contain rich information regarding customers’ dissatisfaction with the quality of goods and services. To harness such information to better serve customers, in this paper we create a machine learning approach to automatically identify product issues and uncover root causes from the customer feedback text. We identify issues at two levels: coarse grained (L-Coarse) and fine grained (L-Granular). We formulate this multi-level product issue identification problem as a seq2seq language generation problem. Specifically, we utilize transformer-based seq2seq models due to their versatility and strong transfer-learning capability. We demonstrate that our approach is label efficient and outperforms traditional approaches such as the multi-class multi-label classification formulation. Based on human evaluation, our fine-tuned model achieves 82.1% and 95.4% of human-level performance for L-Coarse and L-Granular issue identification, respectively. Furthermore, our experiments illustrate that the model can generalize to identify unseen L-Granular issues.
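The seq2seq formulation described above can be illustrated by how (source, target) pairs might be serialized for fine-tuning a transformer model: both issue levels are encoded into a single generated string. The label format, separator, and example data below are assumptions for illustration, not the paper's actual schema.

```python
def to_seq2seq_pair(feedback, coarse_issue, granular_issue):
    """Serialize a multi-level label as one target sequence so a
    transformer seq2seq model can be fine-tuned on (source, target) pairs,
    rather than training a multi-class multi-label classifier."""
    source = f"identify issue: {feedback}"
    target = f"{coarse_issue} | {granular_issue}"
    return source, target

def parse_prediction(generated):
    """Recover the L-Coarse and L-Granular issues from a generated sequence."""
    coarse, _, granular = generated.partition(" | ")
    return coarse.strip(), granular.strip()
```

Because the model generates label text rather than selecting from a fixed output head, this framing naturally allows it to produce L-Granular issues that never appeared in training, consistent with the generalization result reported above.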