Transferring Textual Preferences to Vision-Language Understanding through Model Merging

Chen-An Li, Tzu-Han Lin, Yun-Nung Chen, Hung-yi Lee

Abstract
Large vision-language models (LVLMs) perform strongly across a wide range of multimodal tasks. However, their ability to evaluate generated content remains limited, and training vision-language reward models (VLRMs) on preference data is computationally expensive. This paper explores a training-free alternative: merging text-based reward models (RMs) with LVLMs to create VLRMs. We show that the merged model outperforms both the LVLM's own scoring and the text-based RMs, offering an efficient method for incorporating textual preferences into LVLMs.
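The abstract describes a training-free merge of a text RM into an LVLM. Below is a minimal sketch of one common merging recipe, task-vector-style linear interpolation over the shared language backbone. The function name, the alpha coefficient, and the merge rule itself are illustrative assumptions for exposition, not the authors' exact procedure.

import torch

def merge_reward_into_lvlm(base_sd, rm_sd, lvlm_sd, alpha=0.5):
    """Fold a text RM's preference 'delta' into an LVLM's language tower.

    base_sd: state_dict of the shared pretrained LM backbone
    rm_sd:   state_dict of the text-based RM built on that backbone
    lvlm_sd: state_dict of the LVLM's language tower
    alpha:   merge coefficient (assumed hyperparameter)
    """
    merged = {}
    for name, w in lvlm_sd.items():
        if name in rm_sd and name in base_sd and rm_sd[name].shape == w.shape:
            # Task vector: what preference tuning changed on the text side.
            delta = rm_sd[name] - base_sd[name]
            merged[name] = w + alpha * delta
        else:
            # Vision-specific or mismatched parameters pass through unchanged.
            merged[name] = w
    return merged

Because the merge is a single pass over the parameters, it requires no preference data or gradient updates, which is what makes the approach training-free.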
Anthology ID:
2025.acl-short.72
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
923–943
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-short.72/
Cite (ACL):
Chen-An Li, Tzu-Han Lin, Yun-Nung Chen, and Hung-yi Lee. 2025. Transferring Textual Preferences to Vision-Language Understanding through Model Merging. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 923–943, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Transferring Textual Preferences to Vision-Language Understanding through Model Merging (Li et al., ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-short.72.pdf