Learning to Follow Object-Centric Image Editing Instructions Faithfully

Tuhin Chakrabarty, Kanishk Singh, Arkadiy Saakyan, Smaranda Muresan


Abstract
Natural language instructions are a powerful interface for editing the outputs of text-to-image diffusion models. However, several challenges need to be addressed: 1) underspecification (the need to model the implicit meaning of instructions), 2) grounding (the need to localize where the edit has to be performed), and 3) faithfulness (the need to preserve the elements of the image not affected by the edit instruction). Current approaches to image editing with natural language instructions rely on automatically generated paired data, which, as shown in our investigation, is noisy and sometimes nonsensical, exacerbating the above issues. Building on recent advances in segmentation, Chain-of-Thought prompting, and visual question answering, we significantly improve the quality of the paired data. In addition, we enhance the supervision signal by highlighting the parts of the image that need to be changed by the instruction. The model fine-tuned on the improved data performs fine-grained object-centric edits better than state-of-the-art baselines, mitigating the problems outlined above, as shown by automatic and human evaluations. Moreover, our model generalizes to domains unseen during training, such as visual metaphors.
Anthology ID:
2023.findings-emnlp.646
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
9630–9646
URL:
https://aclanthology.org/2023.findings-emnlp.646
DOI:
10.18653/v1/2023.findings-emnlp.646
Cite (ACL):
Tuhin Chakrabarty, Kanishk Singh, Arkadiy Saakyan, and Smaranda Muresan. 2023. Learning to Follow Object-Centric Image Editing Instructions Faithfully. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9630–9646, Singapore. Association for Computational Linguistics.
Cite (Informal):
Learning to Follow Object-Centric Image Editing Instructions Faithfully (Chakrabarty et al., Findings 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-2/2023.findings-emnlp.646.pdf