A Simple Data Augmentation Strategy for Text-in-Image Scientific VQA

Belal Shoer, Yova Kementchedjhieva


Abstract
Scientific visual question answering poses significant challenges for vision-language models due to the complexity of scientific figures and their multimodal context. Traditional approaches treat the figure and accompanying text (e.g., questions and answer options) as separate inputs. EXAMS-V introduced a new paradigm by embedding both visual and textual content into a single image. However, even state-of-the-art proprietary models perform poorly on this setup in zero-shot settings, underscoring the need for task-specific fine-tuning. To address the scarcity of training data in this “text-in-image” format, we synthesize a new dataset by converting existing separate image-text pairs into unified images. Fine-tuning a small multilingual multimodal model on a mix of our synthetic data and EXAMS-V yields notable gains across 13 languages, with strong average improvements and cross-lingual transfer.
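The core augmentation step described in the abstract, converting a separate figure-plus-text pair into a single "text-in-image" sample, can be sketched in a few lines of Python. The sketch below uses Pillow to paste the figure onto a white canvas and render the question and lettered answer options beneath it. The function name compose_text_in_image, the layout constants, and the font choice are illustrative assumptions, not the authors' actual pipeline.

from PIL import Image, ImageDraw, ImageFont
import textwrap

def compose_text_in_image(figure_path, question, options, out_path,
                          font_path="DejaVuSans.ttf", font_size=20,
                          canvas_width=800, margin=16):
    # Load the figure and scale it to the canvas width, preserving aspect ratio.
    figure = Image.open(figure_path).convert("RGB")
    inner_width = canvas_width - 2 * margin
    scale = inner_width / figure.width
    figure = figure.resize((inner_width, int(figure.height * scale)))

    # Wrap the question text and label the answer options A, B, C, ...
    lines = textwrap.wrap(question, width=70)
    lines += [f"{chr(65 + i)}. {opt}" for i, opt in enumerate(options)]

    font = ImageFont.truetype(font_path, font_size)
    line_height = font_size + 6

    # White canvas: figure on top, rendered question and options underneath.
    canvas = Image.new(
        "RGB",
        (canvas_width, figure.height + line_height * len(lines) + 3 * margin),
        "white")
    canvas.paste(figure, (margin, margin))

    draw = ImageDraw.Draw(canvas)
    y = figure.height + 2 * margin
    for line in lines:
        draw.text((margin, y), line, fill="black", font=font)
        y += line_height
    canvas.save(out_path)

Applied over an existing VQA dataset with separate image and text fields, a loop calling this function would yield unified images in the EXAMS-V style; a multilingual variant would additionally need fonts covering each target script.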
Anthology ID:
2025.winlp-main.17
Volume:
Proceedings of the 9th Widening NLP Workshop
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Chen Zhang, Emily Allaway, Hua Shen, Lesly Miculicich, Yinqiao Li, Meryem M'hamdi, Peerat Limkonchotiwat, Richard He Bai, Santosh T.y.s.s., Sophia Simeng Han, Surendrabikram Thapa, Wiem Ben Rim
Venues:
WiNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
100–105
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.winlp-main.17/
Cite (ACL):
Belal Shoer and Yova Kementchedjhieva. 2025. A Simple Data Augmentation Strategy for Text-in-Image Scientific VQA. In Proceedings of the 9th Widening NLP Workshop, pages 100–105, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
A Simple Data Augmentation Strategy for Text-in-Image Scientific VQA (Shoer & Kementchedjhieva, WiNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.winlp-main.17.pdf