Saayan Mitra
2025
VISIAR: Empower MLLM for Visual Story Ideation
Zhaoyang Xia | Somdeb Sarkhel | Mehrab Tanjim | Stefano Petrangeli | Ishita Dasgupta | Yuxiao Chen | Jinxuan Xu | Di Liu | Saayan Mitra | Dimitris N. Metaxas
Findings of the Association for Computational Linguistics: ACL 2025
Ideation, the process of forming ideas from concepts, is a central part of content creation. However, helping visual content creators by suggesting meaningful sequences of visual assets from a limited collection is challenging: it requires a nuanced understanding of visual assets and the integration of open-world knowledge to support creative exploration. Despite its importance, this task has yet to be fully explored in the existing literature. To fill this gap, we propose Visual Story Ideation, a novel and underexplored task focused on the automated selection and arrangement of visual assets into coherent sequences that convey expressive storylines. We also present VISIAR, Visual Ideation through Sequence Integration and Asset Rearrangement, a robust framework leveraging Multimodal Large Language Models (MLLMs) and a novel Story Graph mechanism. Our framework operates in three key stages: visual content understanding, candidate asset selection, and asset rearrangement via MLLMs. In addition, we curated a new benchmark dataset, called VTravel, to evaluate our methods both qualitatively and quantitatively. User studies and GPT-as-the-judge evaluation show that our approach surpasses a GPT-4o-based baseline by an average of 33.5% and 18.5% across three different metrics, demonstrating the effectiveness of our framework for generating compelling visual stories.
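A minimal sketch of the three-stage pipeline the abstract describes (visual content understanding, candidate asset selection, and asset rearrangement via an MLLM). The mllm callable, all function names, and the prompt formats are hypothetical placeholders for illustration, not the authors' implementation; the Story Graph mechanism is omitted because the abstract does not detail it.

    from dataclasses import dataclass

    @dataclass
    class Asset:
        asset_id: str
        caption: str = ""  # filled in by stage 1

    def understand(assets, mllm):
        # Stage 1: visual content understanding -- caption every asset.
        for a in assets:
            a.caption = mllm(f"Describe the image {a.asset_id} in one sentence.")
        return assets

    def select_candidates(assets, theme, mllm, k=5):
        # Stage 2: candidate asset selection -- keep the k assets the MLLM
        # rates most relevant to the requested storyline theme.
        scored = [(float(mllm(f"Rate 0-1: relevance of '{a.caption}' to '{theme}'")), a)
                  for a in assets]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [a for _, a in scored[:k]]

    def rearrange(candidates, theme, mllm):
        # Stage 3: asset rearrangement -- ask the MLLM for a coherent order,
        # returned as comma-separated indices into the candidate list.
        listing = "\n".join(f"{i}: {a.caption}" for i, a in enumerate(candidates))
        reply = mllm(f"Order these assets into a story about '{theme}'.\n"
                     f"{listing}\nAnswer with comma-separated indices.")
        return [candidates[int(i)] for i in reply.split(",")]

    def ideate(assets, theme, mllm, k=5):
        # Full pipeline: understand -> select -> rearrange.
        selected = select_candidates(understand(assets, mllm), theme, mllm, k)
        return rearrange(selected, theme, mllm)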
2024
TAME-RD: Text Assisted Replication of Image Multi-Adjustments for Reverse Designing
Pooja Guhan | Uttaran Bhattacharya | Somdeb Sarkhel | Vahid Azizi | Xiang Chen | Saayan Mitra | Aniket Bera | Dinesh Manocha
Findings of the Association for Computational Linguistics: ACL 2024
Given a source image and an edited version produced from it following human instructions in natural language, how do we extract the underlying edit operations to automatically replicate similar edits on other images? This is the problem of reverse designing, and we present TAME-RD, a model to solve it. TAME-RD learns from the complex interplay between image editing operations and natural language instructions to infer fully specified edit operations, predicting both the underlying image edit operations as discrete categories and their corresponding parameter values in the continuous space. We accomplish this by mapping together the contextual information from the natural language text and the structural differences between the corresponding source and edited images using the concept of the pre-post effect. We demonstrate the effectiveness of our network through quantitative evaluations on multiple datasets. We observe improvements of 6-10% on various accuracy metrics and 1.01X-4X on the RMSE score and the concordance correlation coefficient for the corresponding parameter values on the benchmark GIER dataset. We also introduce I-MAD, a new two-part dataset: I-MAD-Dense, a collection of approximately 100K source and edited images together with automatically generated text instructions and annotated edit operations, and I-MAD-Pro, consisting of about 1.6K source and edited images together with text instructions and annotated edit operations provided by professional editors. On our dataset, we observe absolute improvements of 1-10% on the accuracy metrics and 1.14X-5X on the RMSE score.
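A minimal sketch of the two-headed prediction the abstract describes: discrete edit-operation categories plus continuous parameter values, driven by instruction-text features fused with the source/edited feature difference (the pre-post effect). The feature dimensions, fusion scheme, and head sizes are illustrative assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class EditPredictor(nn.Module):
        def __init__(self, text_dim=512, img_dim=512, n_ops=24, n_params=8):
            super().__init__()
            # Fuse the instruction features with the structural difference
            # between source and edited image features (pre-post effect).
            self.fuse = nn.Sequential(nn.Linear(text_dim + img_dim, 512), nn.ReLU())
            self.op_head = nn.Linear(512, n_ops)        # discrete edit categories
            self.param_head = nn.Linear(512, n_params)  # continuous parameter values

        def forward(self, text_feat, src_feat, edited_feat):
            diff = edited_feat - src_feat               # pre-post effect
            h = self.fuse(torch.cat([text_feat, diff], dim=-1))
            return self.op_head(h), self.param_head(h)

    # Usage: class logits over edit operations plus regressed parameters.
    model = EditPredictor()
    op_logits, params = model(torch.randn(1, 512), torch.randn(1, 512),
                              torch.randn(1, 512))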