Deqing Fu
2025
DreamSync: Aligning Text-to-Image Generation with Image Understanding Feedback
Jiao Sun | Deqing Fu | Yushi Hu | Su Wang | Royi Rassin | Da-Cheng Juan | Dana Alon | Charles Herrmann | Sjoerd van Steenkiste | Ranjay Krishna | Cyrus Rashtchian
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Despite their widespread success, Text-to-Image (T2I) models still struggle to produce images that are both aesthetically pleasing and faithful to the user's input text. We introduce DreamSync, a simple yet effective training algorithm that improves the faithfulness of T2I models to the text input. DreamSync uses large vision-language models (VLMs) to identify fine-grained discrepancies between generated images and their text inputs, enabling T2I models to self-improve without labeled data. First, it prompts the T2I model to generate several candidate images for a given input text. Then, it uses two VLMs to select the best generation: a Visual Question Answering model that measures the alignment of generated images to the text, and another that measures the generation's aesthetic quality. After selection, we use LoRA to iteratively finetune the T2I model, guiding its generations toward the selected best candidates. DreamSync requires no additional human annotation, model architecture changes, or reinforcement learning. Despite its simplicity, DreamSync improves both the semantic alignment and aesthetic appeal of two diffusion-based T2I models, as evidenced by multiple benchmarks (+1.7% on TIFA, +2.9% on DSG1K, +3.4% on VILA aesthetic); human evaluation shows that DreamSync improves text rendering over SDXL by 18.5% on the DSG1K benchmark.
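The sample-select-finetune loop the abstract describes is easy to sketch. Below is a minimal, self-contained Python sketch of one DreamSync round; `generate`, `vqa_score`, and `aesthetic_score` are hypothetical stand-ins for the T2I model and the two VLM scorers, and the candidate count and threshold are illustrative defaults, not the authors' exact settings.

```python
# Minimal sketch of DreamSync's select-then-finetune loop.
# Hypothetical helper names: generate / vqa_score / aesthetic_score stand in
# for the T2I model, a VQA-based faithfulness scorer, and an aesthetics scorer.
import random
from typing import Callable, List, Tuple

def dreamsync_round(
    prompts: List[str],
    generate: Callable[[str], object],           # T2I model: prompt -> image
    vqa_score: Callable[[object, str], float],   # faithfulness score in [0, 1]
    aesthetic_score: Callable[[object], float],  # aesthetic score in [0, 1]
    n_candidates: int = 8,                       # illustrative, not the paper's setting
    vqa_threshold: float = 0.9,                  # illustrative, not the paper's setting
) -> List[Tuple[str, object]]:
    """One round: sample candidates per prompt, keep the most aesthetic image
    among the text-faithful ones as a (prompt, image) finetuning pair."""
    finetune_set = []
    for prompt in prompts:
        candidates = [generate(prompt) for _ in range(n_candidates)]
        # Filter by textual faithfulness first, then rank by aesthetics.
        faithful = [im for im in candidates if vqa_score(im, prompt) >= vqa_threshold]
        if faithful:
            best = max(faithful, key=aesthetic_score)
            finetune_set.append((prompt, best))
    return finetune_set  # next step: LoRA-finetune the T2I model on these pairs

# Toy stand-ins so the sketch runs; replace with real models.
if __name__ == "__main__":
    pairs = dreamsync_round(
        ["a red cube on a blue sphere"],
        generate=lambda p: f"image<{p}|{random.random():.2f}>",
        vqa_score=lambda im, p: random.random(),
        aesthetic_score=lambda im: random.random(),
    )
    print(pairs)
```

The returned pairs feed the LoRA finetuning step, and the loop repeats so that each iteration's model produces the next iteration's candidates.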
2023
SCENE: Self-Labeled Counterfactuals for Extrapolating to Negative Examples
Deqing Fu | Ameya Godbole | Robin Jia
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Detecting negatives (such as non-entailment relationships, unanswerable questions, and false claims) is an important and challenging aspect of many natural language understanding tasks. Though manually collecting challenging negative examples can help models detect them, it is both costly and domain-specific. In this work, we propose Self-labeled Counterfactuals for Extrapolating to Negative Examples (SCENE), an automatic method for synthesizing training data that greatly improves models' ability to detect challenging negative examples. In contrast with standard data augmentation, which synthesizes new examples for existing labels, SCENE can synthesize negative examples zero-shot from only positive ones. Given a positive example, SCENE perturbs it with a mask-infilling model, then determines whether the resulting example is negative based on a self-training heuristic. With access to only answerable training examples, SCENE closes 69.6% of the performance gap, relative to a model trained on SQuAD 2.0, on SQuAD 2.0 evaluation, where half of the examples are unanswerable. Our method also extends to boolean question answering and recognizing textual entailment, and improves generalization from SQuAD to ACE-whQA, an out-of-domain extractive QA benchmark.
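The perturb-then-self-label procedure can likewise be sketched for extractive QA. In the following minimal Python sketch, `infill` and `qa_model` are hypothetical stand-ins for the mask-infilling model and for the task model used as a self-training labeler, and the confidence rule shown is a simplified illustration of the paper's self-training heuristic, not its exact decision rule.

```python
# Minimal sketch of SCENE-style self-labeled counterfactual generation for
# extractive QA. Hypothetical helpers: infill (mask-infilling model) and
# qa_model (the task model, used here as a self-training labeler).
import random
from typing import Callable, Optional, Tuple

def scene_example(
    question: str,
    context: str,
    answer: str,
    infill: Callable[[str], str],                       # masked question -> new question
    qa_model: Callable[[str, str], Tuple[str, float]],  # -> (predicted answer, confidence)
    confidence_threshold: float = 0.5,                  # illustrative cutoff
) -> Optional[Tuple[str, str, Optional[str]]]:
    """Perturb a positive example, then self-label the result: if the QA model
    still recovers the original answer, keep the new question as answerable;
    if it confidently predicts something else, treat it as a synthetic
    unanswerable (negative) example; if unsure, discard."""
    words = question.split()
    i = random.randrange(len(words))
    masked = " ".join(words[:i] + ["<mask>"] + words[i + 1:])
    new_question = infill(masked)
    pred, conf = qa_model(new_question, context)
    if conf < confidence_threshold:
        return None                               # model is unsure: discard
    if pred == answer:
        return (new_question, context, answer)    # still answerable
    return (new_question, context, None)          # self-labeled negative

# Toy stand-ins so the sketch runs; replace with real models.
if __name__ == "__main__":
    out = scene_example(
        question="Who wrote the report?",
        context="The report was written by Ada.",
        answer="Ada",
        infill=lambda masked: masked.replace("<mask>", "illustrated"),
        qa_model=lambda q, c: ("Ada", 0.9) if "wrote" in q else ("", 0.9),
    )
    print(out)
```

The key design point the sketch illustrates is that no negative labels are ever collected: the perturbation creates candidate negatives, and the model's own predictions decide which candidates to keep.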