Yufei Sun
2025
HPSS: Heuristic Prompting Strategy Search for LLM Evaluators
Bosi Wen | Pei Ke | Yufei Sun | Cunxiang Wang | Xiaotao Gu | Jinfeng Zhou | Jie Tang | Hongning Wang | Minlie Huang
Findings of the Association for Computational Linguistics: ACL 2025
Since the adoption of large language models (LLMs) for text evaluation has become increasingly prevalent in the field of natural language processing (NLP), a series of existing works attempt to optimize the prompts for LLM evaluators to improve their alignment with human judgment. However, their efforts are limited to optimizing individual factors of evaluation prompts, such as evaluation criteria or output formats, neglecting the combinatorial impact of multiple factors, which leads to insufficient optimization of the evaluation pipeline. Nevertheless, identifying well-behaved prompting strategies for adjusting multiple factors requires extensive enumeration. To this end, we comprehensively integrate 8 key factors for evaluation prompts and propose a novel automatic prompting strategy optimization method called Heuristic Prompting Strategy Search (HPSS). Inspired by the genetic algorithm, HPSS conducts an iterative search to find well-behaved prompting strategies for LLM evaluators. A heuristic function is employed to guide the search process, enhancing the performance of our algorithm. Extensive experiments across four evaluation tasks demonstrate the effectiveness of HPSS, consistently outperforming both human-designed evaluation prompts and existing automatic prompt optimization methods. Our code is available at https://github.com/thu-coai/HPSS.
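The abstract describes a genetic-algorithm-inspired iterative search over combinations of prompt factors, with a heuristic function guiding which candidates to evaluate. The sketch below is only an illustration of that general idea, not the authors' HPSS implementation (see the repository above for that): the factor names, the `fitness` and `heuristic` callables, and all hyperparameters are hypothetical; in practice, fitness would be something like correlation with human judgments on a validation set.

```python
import random

# Hypothetical factor space: each prompt factor has a set of candidate options.
# The real HPSS integrates 8 factors; these names are illustrative only.
FACTORS = {
    "criteria_style": ["holistic", "aspect-wise"],
    "output_format":  ["score_only", "analysis_then_score"],
    "scoring_scale":  ["1-5", "1-10"],
}

def random_strategy():
    """Sample one option per factor to form a prompting strategy."""
    return {name: random.choice(opts) for name, opts in FACTORS.items()}

def mutate(strategy):
    """Re-sample a single factor, as in a genetic-algorithm mutation."""
    child = dict(strategy)
    name = random.choice(list(FACTORS))
    child[name] = random.choice(FACTORS[name])
    return child

def search(fitness, heuristic, pop_size=8, iters=20):
    """Iterative search: keep a population of strategies, mutate the
    promising ones, and use a cheap heuristic score to decide which
    mutated candidates are worth the (expensive) fitness evaluation."""
    scored = [(fitness(s), s) for s in (random_strategy() for _ in range(pop_size))]
    for _ in range(iters):
        scored.sort(key=lambda x: x[0], reverse=True)
        parents = [s for _, s in scored[: pop_size // 2]]
        candidates = [mutate(random.choice(parents)) for _ in range(pop_size)]
        # Heuristic guidance: evaluate only the most promising mutations.
        candidates.sort(key=heuristic, reverse=True)
        scored += [(fitness(c), c) for c in candidates[: pop_size // 2]]
    return max(scored, key=lambda x: x[0])
```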
2024
Modalities Should Be Appropriately Leveraged: Uncertainty Guidance for Multimodal Chinese Spelling Correction
Yongliang Lin | Zhen Zhang | Mengting Hu | Yufei Sun | Yuzhi Zhang
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Chinese spelling correction (CSC) aims to detect and correct spelling errors in Chinese texts. Most spelling errors are phonetically or graphically similar to the correct characters, so recent works introduce multimodal features and achieve promising results. In this paper, we find that different spelling errors are biased toward different modalities, highlighting the importance of exploiting multimodal features appropriately. To achieve this goal, we propose the UGMSC framework, which incorporates uncertainty into both the feature learning and correction stages. Specifically, UGMSC makes predictions with multimodal features and estimates the uncertainty of the corresponding modalities. It then dynamically fuses the features of all modalities for model learning and performs spelling correction under an uncertainty-guided strategy. Experimental results on three public datasets demonstrate that the proposed approach yields significant improvements over previous strong multimodal models. The framework is model-agnostic and can be easily applied to other multimodal models.
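The abstract's core idea is to estimate how uncertain each modality (e.g., phonetic, graphic, semantic) is for a given character and to weight the fusion of modality features accordingly. The snippet below is an illustrative sketch only, not the UGMSC implementation: the entropy-based uncertainty measure and the inverse-uncertainty softmax weighting are assumptions about one way such guidance could be realized.

```python
import torch
import torch.nn.functional as F

def predictive_entropy(logits):
    """Per-token uncertainty of one modality, measured as the entropy
    of its softmax distribution over the vocabulary.
    logits: (batch, seq, vocab) -> returns (batch, seq)."""
    probs = F.softmax(logits, dim=-1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

def uncertainty_guided_fusion(features, logits):
    """Fuse modality features with weights that shrink as a modality's
    uncertainty grows. `features` and `logits` are dicts keyed by
    modality name (e.g. "phonetic", "graphic", "semantic")."""
    names = list(features)
    # Uncertainties stacked across modalities: (M, batch, seq)
    unc = torch.stack([predictive_entropy(logits[n]) for n in names])
    # Lower uncertainty -> larger weight, normalized across modalities.
    weights = F.softmax(-unc, dim=0).unsqueeze(-1)          # (M, batch, seq, 1)
    feats = torch.stack([features[n] for n in names])       # (M, batch, seq, dim)
    return (weights * feats).sum(dim=0)                     # (batch, seq, dim)
```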