ChatEdit: Towards Multi-turn Interactive Facial Image Editing via Dialogue
Xing Cui | Zekun Li | Pei Li | Yibo Hu | Hailin Shi | Chunshui Cao | Zhaofeng He
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
This paper explores interactive facial image editing through dialogue and presents the ChatEdit benchmark dataset for evaluating image editing and conversation abilities in this context. ChatEdit is constructed from the CelebA-HQ dataset and incorporates annotated multi-turn dialogues corresponding to user editing requests on the images. The dataset is challenging, as it requires the system to dynamically track and edit images based on user requests while generating appropriate natural language responses. To address these challenges, we propose a framework comprising a dialogue module that tracks user requests and generates responses, and an image editing module that edits images accordingly. Unlike previous approaches, our framework directly tracks the user request of the current turn from the entire dialogue history and edits the initial image instead of manipulating the output from the previous turn, mitigating error accumulation and attribute forgetting. Extensive experiments on the ChatEdit dataset demonstrate the superiority of our framework over previous methods while also revealing room for improvement, encouraging future research. We will release the code and data publicly to facilitate advancements in complex interactive facial image editing.
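A minimal sketch of the turn loop the abstract describes: the system derives the cumulative request from the entire dialogue history and always edits the original image, rather than chaining edits on each turn's output (which accumulates error and can forget earlier attributes). All function names and the attribute-dict "image" representation here are illustrative assumptions, not the authors' actual API.

```python
# Hypothetical illustration: images are stand-in attribute dicts, and each
# dialogue turn contributes an attribute-edit request.

def track_request(history):
    """Merge all edit requests in the dialogue history into one cumulative
    attribute map; later turns override earlier ones."""
    request = {}
    for turn in history:
        request.update(turn)
    return request

def edit_image(initial_image, request):
    """Apply the cumulative request to the INITIAL image, not the
    previous turn's output."""
    edited = dict(initial_image)
    edited.update(request)
    return edited

# Example dialogue: turn 1 adds a smile, turn 2 changes hair color,
# turn 3 revises the smile intensity.
initial_image = {"smile": 0.0, "hair": "black"}
history = []
for turn in [{"smile": 1.0}, {"hair": "blond"}, {"smile": 0.5}]:
    history.append(turn)
    result = edit_image(initial_image, track_request(history))

print(result)  # {'smile': 0.5, 'hair': 'blond'}
```

Because every turn re-edits the initial image under the full cumulative request, revising one attribute (the smile) cannot degrade or drop another (the hair color), which is the error-accumulation and attribute-forgetting issue the framework targets.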