@inproceedings{simplicio-etal-2024-v,
    title = "{V}-{G}l{\'o}r{IA} - Customizing Large Vision and Language Models to {E}uropean {P}ortuguese",
    author = "Simpl{\'i}cio, Afonso  and
      Semedo, David  and
      Magalh{\~a}es, Jo{\~a}o",
    editor = "Kumar, Sachin  and
      Balachandran, Vidhisha  and
      Park, Chan Young  and
      Shi, Weijia  and
      Hayati, Shirley Anugrah  and
      Tsvetkov, Yulia  and
      Smith, Noah  and
      Hajishirzi, Hannaneh  and
      Kang, Dongyeop  and
      Jurgens, David",
    booktitle = "Proceedings of the 1st Workshop on Customizable NLP: Progress and Challenges in Customizing NLP for a Domain, Application, Group, or Individual (CustomNLP4U)",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.customnlp4u-1.24/",
    doi = "10.18653/v1/2024.customnlp4u-1.24",
    pages = "317--326",
    abstract = "Generative Vision and Language models have obtained remarkable results recently, thanks to the use of robust pre-trained Visual encoders and Large Language Models (LLMs), together with efficient model adaptation training strategies, requiring minimal architectural modifications, while preserving LLMs' original capabilities. With these advances focusing mainly on the English language, there is a gap in customization methodologies for other languages. In this paper, we propose a customization methodology that adapts existing state-of-the-art vision and language architectures to European Portuguese (PT-PT). As a result of applying this methodology, we introduce V-Gl{\'o}rIA, the first Large Vision and Language generative model specifically customized for European Portuguese. V-Gl{\'o}rIA supports multimodal tasks such as image captioning, retrieval, and dialogue. To deliver V-Gl{\'o}rIA, we leverage state-of-the-art V{\&}L architectures, and contribute with PT-PT machine-translated pre-training (CC3M PT-PT) and benchmark (MSCOCO PT-PT and VisDial PT-PT) datasets. Our experiments show that V-Gl{\'o}rIA delivers promising performance in text-image retrieval and downstream tasks in a zero-shot setting, such as image captioning and visual dialogue tasks, highlighting the effectiveness of our customization approach."
}