2025
RefVNLI: Towards Scalable Evaluation of Subject-driven Text-to-image Generation
Aviv Slobodkin | Hagai Taitelbaum | Yonatan Bitton | Brian Gordon | Michal Sokolik | Nitzan Bitton Guetta | Almog Gueta | Royi Rassin | Dani Lischinski | Idan Szpektor
Findings of the Association for Computational Linguistics: EMNLP 2025
Subject-driven text-to-image (T2I) generation aims to produce images that align with a given textual description, while preserving the visual identity from a referenced subject image. Despite its broad downstream applicability—ranging from enhanced personalization in image generation to consistent character representation in video rendering—progress in this field is limited by the lack of reliable automatic evaluation. Existing methods either assess only one aspect of the task (i.e., textual alignment or subject preservation), misalign with human judgments, or rely on costly API-based evaluation. To address this gap, we introduce RefVNLI, a cost-effective metric that evaluates both textual alignment and subject preservation in a single run. Trained on a large-scale dataset derived from video-reasoning benchmarks and image perturbations, RefVNLI outperforms or statistically matches existing baselines across multiple benchmarks and subject categories (e.g., Animal, Object), achieving up to 6.4-point gains in textual alignment and 5.9-point gains in subject preservation.
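The abstract above describes RefVNLI as producing both a textual-alignment score and a subject-preservation score in a single run. As a rough illustration of what such an evaluation interface could look like (this is not the authors' released code; `RefScores`, `SubjectDrivenMetric`, and `evaluate_benchmark` are hypothetical names), a benchmark loop over (prompt, reference image, generated image) triplets might be sketched as:

```python
# Hedged sketch of a dual-score evaluation interface of the kind the abstract
# describes: a single call returns both textual alignment and subject preservation.
# All names here are illustrative stand-ins, not the RefVNLI implementation.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class RefScores:
    textual_alignment: float      # does the image match the prompt? (0-1)
    subject_preservation: float   # is the referenced subject's identity kept? (0-1)


class SubjectDrivenMetric(Protocol):
    def score(self, prompt: str, reference_image: bytes, generated_image: bytes) -> RefScores:
        ...


def evaluate_benchmark(metric: SubjectDrivenMetric,
                       examples: list[tuple[str, bytes, bytes]]) -> RefScores:
    """Aggregate both scores over a benchmark in a single pass."""
    ta, sp = [], []
    for prompt, ref_img, gen_img in examples:
        s = metric.score(prompt, ref_img, gen_img)
        ta.append(s.textual_alignment)
        sp.append(s.subject_preservation)
    n = max(len(examples), 1)
    return RefScores(sum(ta) / n, sum(sp) / n)
```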
DreamSync: Aligning Text-to-Image Generation with Image Understanding Feedback
Jiao Sun | Deqing Fu | Yushi Hu | Su Wang | Royi Rassin | Da-Cheng Juan | Dana Alon | Charles Herrmann | Sjoerd Van Steenkiste | Ranjay Krishna | Cyrus Rashtchian
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Despite their widespread success, Text-to-Image (T2I) models still struggle to produce images that are both aesthetically pleasing and faithful to the user's input text. We introduce DreamSync, a simple yet effective training algorithm that improves the faithfulness of T2I models to the input text. DreamSync utilizes large vision-language models (VLMs) to identify fine-grained discrepancies between generated images and their text inputs, enabling T2I models to self-improve without labeled data. First, it prompts the model to generate several candidate images for a given input text. Then, it uses two VLMs to select the best generation: a Visual Question Answering model that measures the alignment of generated images to the text, and another that measures the generation's aesthetic quality. After selection, we use LoRA to iteratively finetune the T2I model, guiding its generation towards the selected best candidates. DreamSync does not need any additional human annotation, model architecture changes, or reinforcement learning. Despite its simplicity, DreamSync improves both the semantic alignment and aesthetic appeal of two diffusion-based T2I models, as evidenced by multiple benchmarks (+1.7% on TIFA, +2.9% on DSG1K, +3.4% on VILA aesthetic); human evaluation further shows that DreamSync improves text rendering compared to SDXL by 18.5% on the DSG1K benchmark.
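The generate-select-finetune loop described above can be sketched as follows. This is only a schematic reading of the abstract, not the authors' implementation: `generate_candidates`, `vqa_alignment_score`, `aesthetic_score`, and `lora_finetune` are hypothetical placeholders for the T2I model, the two VLM judges, and the LoRA update, and the threshold-then-argmax selection rule is an assumed simplification of how the two judges are combined.

```python
# Minimal sketch of a DreamSync-style self-improvement round.
# All callables passed in are hypothetical stand-ins; only the
# candidate-selection and finetuning flow follows the abstract.
from typing import Callable, List, Tuple

Image = bytes  # placeholder type for a generated image


def dreamsync_round(
    prompts: List[str],
    generate_candidates: Callable[[str, int], List[Image]],
    vqa_alignment_score: Callable[[str, Image], float],
    aesthetic_score: Callable[[Image], float],
    lora_finetune: Callable[[List[Tuple[str, Image]]], None],
    num_candidates: int = 8,
    alignment_threshold: float = 0.9,
) -> None:
    """One round: sample candidates, select the best per prompt, finetune with LoRA."""
    selected: List[Tuple[str, Image]] = []
    for prompt in prompts:
        candidates = generate_candidates(prompt, num_candidates)
        # Keep only candidates the VQA judge deems faithful to the prompt ...
        faithful = [img for img in candidates
                    if vqa_alignment_score(prompt, img) >= alignment_threshold]
        if not faithful:
            continue  # no candidate passes; skip this prompt for this round
        # ... then break ties by aesthetic quality.
        best = max(faithful, key=aesthetic_score)
        selected.append((prompt, best))
    if selected:
        lora_finetune(selected)  # iterate: the next round samples from the updated model
```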
2024
Evaluating D-MERIT of Partial-annotation on Information Retrieval
Royi Rassin | Yaron Fairstein | Oren Kalinsky | Guy Kushilevitz | Nachshon Cohen | Alexander Libov | Yoav Goldberg
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
2023
Conjunct Resolution in the Face of Verbal Omissions
Royi Rassin | Yoav Goldberg | Reut Tsarfaty
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Verbal omissions are complex syntactic phenomena in VP coordination structures. They occur when verbs and (some of) their arguments are omitted from subsequent clauses after being explicitly stated in an initial clause. Recovering these omitted elements is necessary for accurate interpretation of the sentence, and while humans easily and intuitively fill in the missing information, state-of-the-art models continue to struggle with this task. Previous work is limited to small-scale datasets, synthetic data-creation methods, and resolution at the dependency-graph level. In this work we propose a conjunct resolution task that operates directly on the text and makes use of a split-and-rephrase paradigm in order to recover the missing elements in the coordination structure. To this end, we first formulate a pragmatic framework of verbal omissions which describes the different types of omissions, and develop an automatic, scalable collection method. Based on this method, we curate a large dataset containing over 10K examples of naturally occurring verbal omissions with crowd-sourced annotations of the resolved conjuncts. We train various neural baselines for this task, and show that while our best method obtains decent performance, it leaves ample room for improvement. We propose our dataset, metrics and models as a starting point for future research on this topic.
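To make the split-and-rephrase formulation concrete, here is a constructed illustration of a verbal omission and its resolution; the sentence is invented for illustration and is not drawn from the paper's dataset.

```python
# Illustrative conjunct-resolution example (constructed, not from the dataset):
# the input omits the verb in the second conjunct; the resolved output rephrases
# the coordination so that every clause carries its own verb and arguments.
example = {
    "input": "John ordered pasta and Mary a salad.",
    "resolved": ["John ordered pasta.", "Mary ordered a salad."],
}

print("Input:   ", example["input"])
print("Resolved:", " ".join(example["resolved"]))
```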
2022
DALLE-2 is Seeing Double: Flaws in Word-to-Concept Mapping in Text2Image Models
Royi Rassin | Shauli Ravfogel | Yoav Goldberg
Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
We study the way DALLE-2 maps symbols (words) in the prompt to their references (entities or properties of entities in the generated image). We show that, in stark contrast to the way humans process language, DALLE-2 does not follow the constraint that each word has a single role in the interpretation, and sometimes re-uses the same symbol for different purposes. We collect a set of stimuli that reflect the phenomenon: we show that DALLE-2 depicts both senses of nouns with multiple senses at once, and that a given word can modify the properties of two distinct entities in the image, or can be depicted as one object while also modifying the properties of another object, creating a semantic leakage of properties between entities. Taken together, our study highlights the differences between DALLE-2 and human language processing and opens an avenue for future study of the inductive biases of text-to-image models.
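To illustrate the two failure modes the abstract names, the snippet below lists probe prompts of that kind; they are hypothetical constructions, not the paper's actual stimulus set, and no claim is made here about how DALLE-2 renders them.

```python
# Hypothetical probe prompts of the kind described above (illustrative only).
probes = {
    # A noun with multiple senses: is one word depicted in both senses at once?
    "homonym_duplication": "a bat flying over the field",
    # One modifier, two entities: does a property leak from one entity to the other?
    "property_leakage": "a yellow book and a vase",
}

for failure_mode, prompt in probes.items():
    print(f"{failure_mode}: {prompt!r}")
```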