Towards Cross-Lingual Explanation of Artwork in Large-scale Vision Language Models
Shintaro Ozaki, Kazuki Hayashi, Yusuke Sakai, Hidetaka Kamigaito, Katsuhiko Hayashi, Taro Watanabe
Abstract
As the performance of Large-scale Vision Language Models (LVLMs) improves, they are increasingly capable of responding in multiple languages, and the demand for explanations generated by LVLMs is expected to grow. However, both the pre-training of the Vision Encoder and its integrated training with the LLM are conducted mainly on English data, so it remains uncertain whether LVLMs can fully realize their potential when generating explanations in languages other than English. In addition, multilingual QA benchmarks whose datasets are created via machine translation carry cultural differences and biases, which remain problematic when they are used as evaluation tasks. To address these challenges, this study created a dataset extended into multiple languages without relying on machine translation. This dataset, which accounts for nuances and country-specific phrasing, was then used to evaluate the explanation-generation abilities of LVLMs. Furthermore, this study examined whether Instruction-Tuning on resource-rich English data improves performance in other languages. Our findings indicate that LVLMs perform worse in languages other than English compared to English, and that they struggle to effectively leverage the knowledge learned from English data.
- Anthology ID:
- 2025.findings-naacl.209
- Volume:
- Findings of the Association for Computational Linguistics: NAACL 2025
- Month:
- April
- Year:
- 2025
- Address:
- Albuquerque, New Mexico
- Editors:
- Luis Chiruzzo, Alan Ritter, Lu Wang
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 3773–3809
- URL:
- https://preview.aclanthology.org/Author-page-Marten-During-lu/2025.findings-naacl.209/
- Cite (ACL):
- Shintaro Ozaki, Kazuki Hayashi, Yusuke Sakai, Hidetaka Kamigaito, Katsuhiko Hayashi, and Taro Watanabe. 2025. Towards Cross-Lingual Explanation of Artwork in Large-scale Vision Language Models. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 3773–3809, Albuquerque, New Mexico. Association for Computational Linguistics.
- Cite (Informal):
- Towards Cross-Lingual Explanation of Artwork in Large-scale Vision Language Models (Ozaki et al., Findings 2025)
- PDF:
- https://preview.aclanthology.org/Author-page-Marten-During-lu/2025.findings-naacl.209.pdf