@inproceedings{gupta-etal-2025-exploring,
    title = "Exploring Multimodal Language Models for Sustainability Disclosure Extraction: A Comparative Study",
    author = "Gupta, Tanay  and
      Goel, Tushar  and
      Verma, Ishan",
    editor = "Drozd, Aleksandr  and
      Sedoc, Jo{\~a}o  and
      Tafreshi, Shabnam  and
      Akula, Arjun  and
      Shu, Raphael",
    booktitle = "The Sixth Workshop on Insights from Negative Results in NLP",
    month = may,
    year = "2025",
    address = "Albuquerque, New Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.insights-1.13/",
    doi = "10.18653/v1/2025.insights-1.13",
    pages = "141--149",
    ISBN = "979-8-89176-240-4",
    abstract = "Sustainability metrics have increasingly become a crucial non-financial criterion in investment decision-making. Organizations worldwide are recognizing the importance of sustainability and are proactively highlighting their efforts through specialized sustainability reports. Unlike traditional annual reports, these sustainability disclosures are typically text-heavy and are often expressed as infographics, complex tables, and charts. The non-machine-readable nature of these reports presents a significant challenge for efficient information extraction. The rapid advancement of Vision Language Models (VLMs) has raised the question of whether these VLMs can address such challenges in domain-specific tasks. In this study, we demonstrate the application of VLMs for extracting sustainability information from dedicated sustainability reports. Our experiments highlight the limitations in the performance of several open-source VLMs in extracting information about sustainability disclosures from different types of pages."
}