What Makes a Good Natural Language Prompt?
Do Xuan Long | Duy Dinh | Ngoc-Hai Nguyen | Kenji Kawaguchi | Nancy F. Chen | Shafiq Joty | Min-Yen Kan
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
As large language models (LLMs) have progressed towards more human-like behavior and human–AI communication has become prevalent, prompting has emerged as a decisive component. However, there is limited conceptual consensus on what exactly constitutes a good natural language prompt. We attempt to address this question by conducting a meta-analysis surveying 150+ prompting-related papers from leading NLP and AI conferences (2022–2024) and blogs. We propose a property- and human-centric framework for evaluating prompt quality, encompassing 21 properties categorized into six dimensions. We then examine how existing studies assess their impact on LLMs, revealing imbalanced support across models and tasks, and substantial research gaps. Further, we analyze correlations among properties in high-quality natural language prompts, deriving prompting recommendations. Finally, we explore multi-property prompt enhancements in reasoning tasks, observing that single-property enhancements often have the greatest impact. Our findings establish a foundation for property-centric prompt evaluation and optimization, bridging the gaps in human–AI communication and opening new prompting research directions.