GreaterPrompt: A Unified, Customizable, and High-Performing Open-Source Toolkit for Prompt Optimization

Wenliang Zheng, Sarkar Snigdha Sarathi Das, Yusen Zhang, Rui Zhang


Abstract
LLMs have gained immense popularity among researchers and the general public for their impressive capabilities on a variety of tasks. Notably, the efficacy of LLMs remains significantly dependent on the quality and structure of the input prompts, making prompt design a critical factor for their performance. Recent advancements in automated prompt optimization have introduced diverse techniques that automatically enhance prompts to better align model outputs with user expectations. However, these methods often suffer from a lack of standardization and compatibility across techniques, limited flexibility in customization, inconsistent performance across model scales, and exclusive reliance on expensive proprietary LLM APIs. To fill this gap, we introduce GreaterPrompt, a novel framework that democratizes prompt optimization by consolidating diverse methods under a unified, customizable API while delivering highly effective prompts for different tasks. Our framework flexibly accommodates various model scales by leveraging both text feedback-based optimization for larger LLMs and internal gradient-based optimization for smaller models to achieve powerful and precise prompt improvements. Moreover, we provide a user-friendly Web UI that ensures accessibility for non-expert users, enabling broader adoption and enhanced performance across various user groups and application scenarios. GreaterPrompt is available at https://github.com/psunlpgroup/GreaterPrompt via GitHub, PyPI, and web user interfaces.
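The text feedback-based optimization the abstract mentions can be illustrated with a minimal, runnable sketch of the general critique-and-rewrite loop such toolkits implement. This is not GreaterPrompt's actual API: the `score` and `propose_rewrites` functions below are offline stubs standing in for LLM calls, and all names are hypothetical.

```python
# Toy sketch of feedback-driven prompt optimization (hypothetical names,
# not GreaterPrompt's real interface). In a real system, score() would
# evaluate task accuracy on a dev set and propose_rewrites() would ask an
# LLM to critique the prompt and suggest revisions; both are stubbed here
# so the loop runs offline.

def score(prompt: str) -> float:
    """Stub evaluator: rewards prompts containing task-relevant phrases."""
    keywords = ("step by step", "answer")
    return sum(kw in prompt.lower() for kw in keywords)

def propose_rewrites(prompt: str) -> list[str]:
    """Stub 'text feedback' step: applies fixed candidate edits."""
    return [
        prompt + " Think step by step.",
        prompt + " Give only the final answer.",
    ]

def optimize(seed: str, iterations: int = 3) -> str:
    """Greedy hill climbing: keep the best-scoring prompt seen so far."""
    best, best_score = seed, score(seed)
    for _ in range(iterations):
        for cand in propose_rewrites(best):
            s = score(cand)
            if s > best_score:
                best, best_score = cand, s
    return best

if __name__ == "__main__":
    print(optimize("Solve the math problem."))
```

The design choice to keep only strictly improving candidates makes the loop monotone in the score; real optimizers add exploration (e.g. sampling multiple rewrites per round) to escape local optima.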
Anthology ID:
2025.acl-demo.39
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Pushkar Mishra, Smaranda Muresan, Tao Yu
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
405–415
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-demo.39/
Cite (ACL):
Wenliang Zheng, Sarkar Snigdha Sarathi Das, Yusen Zhang, and Rui Zhang. 2025. GreaterPrompt: A Unified, Customizable, and High-Performing Open-Source Toolkit for Prompt Optimization. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), pages 405–415, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
GreaterPrompt: A Unified, Customizable, and High-Performing Open-Source Toolkit for Prompt Optimization (Zheng et al., ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-demo.39.pdf
Copyright agreement:
2025.acl-demo.39.copyright_agreement.pdf