Taming Text-to-Image Synthesis for Novices: User-centric Prompt Generation via Multi-turn Guidance

Yilun Liu, Minggui He, Feiyu Yao, Yuhe Ji, Shimin Tao, Jingzhou Du, Justin Li, Jian Gao, Zhang Li, Hao Yang, Boxing Chen, Osamu Yoshie


Abstract
The emergence of text-to-image synthesis (TIS) models has significantly influenced digital image creation by producing high-quality visuals from written descriptions. Yet these models are highly sensitive to textual prompts, posing a challenge for novice users who may not be familiar with TIS prompt writing. Existing solutions mitigate this via automatic prompt expansion or generation from a user query. However, this single-turn manner offers limited user-centricity in terms of result interpretability and user interactivity. Thus, we propose DialPrompt, a dialogue-based TIS prompt generation model that emphasizes user experience for novice users. DialPrompt follows a multi-turn workflow: in each round of dialogue, the model guides users to express their preferences on possible optimization dimensions before generating the final TIS prompt. To achieve this, we mined 15 essential dimensions for high-quality prompts from advanced users and curated a multi-turn dataset. Through training on this dataset, DialPrompt improves user-centricity by allowing users to perceive and control the creation process of TIS prompts. Experiments indicate that DialPrompt significantly outperforms existing approaches in user-centricity score while maintaining competitive quality of synthesized images. In our user evaluation, DialPrompt is highly rated by 19 human reviewers (especially novices).
Anthology ID:
2025.emnlp-main.444
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
8805–8822
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.444/
Cite (ACL):
Yilun Liu, Minggui He, Feiyu Yao, Yuhe Ji, Shimin Tao, Jingzhou Du, Justin Li, Jian Gao, Zhang Li, Hao Yang, Boxing Chen, and Osamu Yoshie. 2025. Taming Text-to-Image Synthesis for Novices: User-centric Prompt Generation via Multi-turn Guidance. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 8805–8822, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Taming Text-to-Image Synthesis for Novices: User-centric Prompt Generation via Multi-turn Guidance (Liu et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.444.pdf
Checklist:
 2025.emnlp-main.444.checklist.pdf