André Carreiro

Also published as: Andre Carreiro


2025

Benchmarking Table Extraction: Multimodal LLMs vs Traditional OCR
Guilherme Nunes | Vitor Rolla | Duarte Pereira | Vasco Alves | Andre Carreiro | Márcia Baptista
Proceedings of the 1st Joint Workshop on Large Language Models and Structure Modeling (XLLM 2025)

This paper compares two approaches to table extraction from images: deep learning computer vision and Multimodal Large Language Models (MLLMs). Computer vision models for table extraction, such as the Table Transformer (TATR), have improved the extraction of complex table layouts by combining deep learning for precise structural recognition with traditional Optical Character Recognition (OCR). In contrast, MLLMs, which process both text and image inputs, offer a novel approach that potentially bypasses the limitations of TATR-plus-OCR pipelines altogether: models such as GPT-4o, Phi-3 Vision, and Granite Vision 3.2 can analyze and interpret table images directly. Both methodologies were evaluated with the state-of-the-art Grid Table Similarity (GriTS) metric, which provides nuanced insight into both structural and text-content accuracy. Using PubTables-1M, a comprehensive and widely used benchmark in the field, this study highlights the strengths and limitations of each approach, setting the stage for future innovations in table extraction. Deep learning computer vision techniques retain a slight edge in recovering table structural layout, but MLLMs are far better at extracting text cell content.
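To make the evaluation concrete, the following Python snippet sketches a GriTS-content-style score for two pre-aligned table grids. It is a simplified illustration only: the actual GriTS metric searches for the most similar two-dimensional substructures of the predicted and ground-truth tables rather than assuming cells line up by position, and the `grits_content_sketch` function and toy tables below are not from the paper.

```python
from difflib import SequenceMatcher

def cell_similarity(a: str, b: str) -> float:
    """String similarity in [0, 1] between two cell texts."""
    return SequenceMatcher(None, a, b).ratio()

def grits_content_sketch(pred: list[list[str]], truth: list[list[str]]) -> float:
    """Simplified GriTS-content-style F-score over pre-aligned grids.

    Sums per-cell similarities on the overlapping grid region and treats
    that sum as the 'matched' weight; real GriTS instead optimizes a 2D
    alignment between the two tables before scoring.
    """
    rows = min(len(pred), len(truth))
    cols = min(len(pred[0]) if pred else 0, len(truth[0]) if truth else 0)
    matched = sum(
        cell_similarity(pred[i][j], truth[i][j])
        for i in range(rows) for j in range(cols)
    )
    n_pred = sum(len(r) for r in pred)
    n_truth = sum(len(r) for r in truth)
    if n_pred == 0 or n_truth == 0:
        return 0.0
    # 2 * matched / (n_pred + n_truth) == harmonic mean of precision and recall.
    return 2 * matched / (n_pred + n_truth)

# Toy example: an extracted table with one garbled cell ('0CR' for 'OCR').
truth = [["Model", "Score"], ["TATR+OCR", "0.97"]]
pred  = [["Model", "Score"], ["TATR+0CR", "0.97"]]
print(f"{grits_content_sketch(pred, truth):.3f}")  # -> 0.969
```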

2024

Unlocking the Potential of Large Language Models for Clinical Text Anonymization: A Comparative Study
David Pissarra | Isabel Curioso | João Alveira | Duarte Pereira | Bruno Ribeiro | Tomás Souper | Vasco Gomes | André Carreiro | Vitor Rolla
Proceedings of the Fifth Workshop on Privacy in Natural Language Processing

Automated clinical text anonymization has the potential to unlock widespread sharing of textual health data for secondary use while assuring patient privacy. Despite the many complex and theoretically successful anonymization solutions proposed in the literature, these techniques remain flawed, and clinical institutions are still reluctant to apply them to open up access to their data. Recent advances in Large Language Models (LLMs) pose a promising opportunity to advance the field, given their capability to perform a wide variety of tasks. This paper proposes six new evaluation metrics tailored to the challenges of generative anonymization with LLMs. Moreover, we present a comparative study of LLM-based methods, testing them against two baseline techniques. Our results establish LLM-based models as a reliable alternative to common approaches, paving the way toward trustworthy anonymization of clinical text.
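As one illustration of the kind of check such metrics automate, the sketch below computes a simple privacy-recall score: the fraction of annotated identifiers that no longer appear verbatim in an anonymized rewrite. This is a generic example for intuition only, not one of the six metrics proposed in the paper.

```python
import re

def phi_leakage_recall(anonymized: str, phi_spans: list[str]) -> float:
    """Fraction of annotated PHI strings successfully removed from the output.

    A span 'leaks' if it still appears verbatim (case-insensitive) in the
    anonymized text; generative rewrites also need semantic checks, which
    this literal match deliberately ignores for brevity.
    """
    if not phi_spans:
        return 1.0
    leaked = sum(
        1 for span in phi_spans
        if re.search(re.escape(span), anonymized, flags=re.IGNORECASE)
    )
    return 1.0 - leaked / len(phi_spans)

note = "Patient John Doe, seen at St. Mary's Hospital on 2023-04-01."
rewrite = "Patient [NAME], seen at [HOSPITAL] on [DATE]."
print(phi_leakage_recall(rewrite, ["John Doe", "St. Mary's Hospital", "2023-04-01"]))
# -> 1.0 (none of the annotated identifiers survive in the rewrite)
```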

Anonymization Through Substitution: Words vs Sentences
Vasco Alves | Vitor Rolla | João Alveira | David Pissarra | Duarte Pereira | Isabel Curioso | André Carreiro | Henrique Lopes Cardoso
Proceedings of the Fifth Workshop on Privacy in Natural Language Processing

Anonymization of clinical text is crucial for sharing and disclosing health records while safeguarding patient privacy. However, automated anonymization remains rarely used in healthcare practice, as existing systems cannot assure the anonymization of all private information. This paper explores a novel technique that guarantees the removal of all sensitive information by replacing every word or sentence of a clinical note, using text embeddings obtained from a de-identified dataset. We analyze the performance of different embedding techniques and models, evaluating them with recently proposed evaluation metrics. The results demonstrate that sentence replacement is better at keeping relevant medical information intact, while word replacement performs better in terms of anonymization sensitivity.
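The sketch below shows the substitution mechanics at word and sentence granularity. The `embed` function is a deterministic placeholder standing in for a real encoder (the paper compares several embedding models), and the pool and note are toy data; only the nearest-neighbour replacement logic mirrors the approach described above.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Placeholder encoder: a deterministic pseudo-random unit vector per text.

    Stands in for a real word/sentence embedding model; any encoder that
    maps text to vectors slots in here.
    """
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

def substitute(units: list[str], safe_pool: list[str]) -> list[str]:
    """Replace each unit (a word or a sentence) with its nearest neighbour
    from a pool built only from de-identified text, so no original unit
    can survive into the output."""
    pool_vecs = np.stack([embed(u) for u in safe_pool])
    # Dot product of unit vectors == cosine similarity.
    return [safe_pool[int(np.argmax(pool_vecs @ embed(u)))] for u in units]

# Word-level vs sentence-level granularity on a toy note:
note = "Patient reports chest pain"
print(" ".join(substitute(note.split(), ["subject", "notes", "cardiac", "discomfort"])))
print(substitute([note], ["Subject describes cardiac discomfort."])[0])
```

Replacing whole sentences keeps more of the note's clinical meaning intact, while replacing individual words scrubs identifiers more aggressively, which is the trade-off the abstract reports.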