DiffChat: Learning to Chat with Text-to-Image Synthesis Models for Interactive Image Creation

Jiapeng Wang, Chengyu Wang, Tingfeng Cao, Jun Huang, Lianwen Jin


Abstract
We present DiffChat, a novel method to align Large Language Models (LLMs) to “chat” with prompt-as-input Text-to-Image Synthesis (TIS) models (e.g., Stable Diffusion) for interactive image creation. Given a raw prompt/image and a user-specified instruction, DiffChat can effectively make appropriate modifications and generate a target prompt, which can be leveraged to create a high-quality target image. To achieve this, we first collect an instruction-following prompt engineering dataset named InstructPE for the supervised training of DiffChat. Next, we propose a reinforcement learning framework with feedback on three core criteria for image creation, i.e., aesthetics, user preference, and content integrity. It involves an action-space dynamic modification technique to obtain more relevant positive samples and harder negative samples during off-policy sampling. Content integrity is also introduced into the value estimation function to further improve the produced images. Our method exhibits superior performance over baseline models and strong competitors in both automatic and human evaluations, fully demonstrating its effectiveness.
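To make the interaction pattern in the abstract concrete, the sketch below shows one round of "prompt rewriting then image synthesis": an instruction-following LLM turns a raw prompt plus a user instruction into a target prompt, which is then fed to a prompt-as-input TIS model (Stable Diffusion via the diffusers library). This is a minimal illustration under assumed components, not the authors' released DiffChat model or InstructPE-trained rewriter; the rewriter model name, chat template, and Stable Diffusion checkpoint are placeholders.

```python
# Minimal sketch of a DiffChat-style interaction loop (illustrative only).
# The prompt-rewriting LLM and checkpoints below are assumptions, not the
# authors' actual models.
import torch
from diffusers import StableDiffusionPipeline
from transformers import pipeline

# Hypothetical instruction-following LLM used as the prompt rewriter.
rewriter = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-7b-beta",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Stable Diffusion as the prompt-as-input TIS model.
sd = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def rewrite_prompt(raw_prompt: str, instruction: str) -> str:
    """Ask the LLM to turn (raw prompt, user instruction) into a target prompt."""
    messages = [
        {"role": "system",
         "content": "You refine text-to-image prompts according to user "
                    "instructions. Return only the revised prompt."},
        {"role": "user",
         "content": f"Prompt: {raw_prompt}\nInstruction: {instruction}"},
    ]
    out = rewriter(messages, max_new_tokens=80, do_sample=False)
    # The pipeline returns the chat history with the assistant reply appended.
    return out[0]["generated_text"][-1]["content"].strip()

# One round of interactive image creation.
raw_prompt = "a cabin by a lake"
instruction = "make it snowy and add warm light in the windows"
target_prompt = rewrite_prompt(raw_prompt, instruction)
image = sd(target_prompt).images[0]
image.save("target_image.png")
```

In the paper, the rewriter is additionally trained with supervised learning on InstructPE and refined with reinforcement learning whose reward reflects aesthetics, user preference, and content integrity; the sketch above only covers the inference-time interaction.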
Anthology ID:
2024.findings-acl.522
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
8826–8840
URL:
https://aclanthology.org/2024.findings-acl.522
Cite (ACL):
Jiapeng Wang, Chengyu Wang, Tingfeng Cao, Jun Huang, and Lianwen Jin. 2024. DiffChat: Learning to Chat with Text-to-Image Synthesis Models for Interactive Image Creation. In Findings of the Association for Computational Linguistics ACL 2024, pages 8826–8840, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
DiffChat: Learning to Chat with Text-to-Image Synthesis Models for Interactive Image Creation (Wang et al., Findings 2024)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2024.findings-acl.522.pdf