Maximiliano Hormazábal Lagos
2025
MRT at SemEval-2025 Task 8: Maximizing Recovery from Tables with Multiple Steps
Maximiliano Hormazábal Lagos | Álvaro Bueno Sáez | Héctor Cerezo-Costas | Pedro Alonso Doval | Jorge Alcalde Vesteiro
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
In this paper we present our approach to the SemEval 2025 Task 8: Question-Answering over Tabular Data challenge. Our strategy leverages Python code generation with LLMs to interact with the table and answer the questions. The process comprises multiple steps: understanding the content of the table, generating natural-language instructions as steps to follow to obtain the answer, translating these instructions into code, running the code, and handling potential errors or exceptions. Each step uses open-source LLMs and fine-grained prompts optimized for that step. With this approach, we achieved a score of 70.50% on subtask 1.
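The final steps of the pipeline (run the generated code, handle errors) can be sketched as below. This is a minimal illustration, not the paper's implementation: the table, the generated snippet, and the retry loop are all hypothetical stand-ins, and in the actual system a failed attempt would be fed back to the LLM for regeneration rather than simply retried.

```python
def run_generated_code(table, code, max_retries=2):
    """Execute LLM-generated code against a table and return `answer`.

    `table` is exposed to the snippet under the name `table`; the snippet
    is expected to assign its result to a variable called `answer`.
    Errors trigger a retry (a stand-in for re-prompting the model).
    """
    for attempt in range(max_retries + 1):
        try:
            scope = {"table": table}
            exec(code, scope)
            return scope["answer"]
        except Exception:
            if attempt == max_retries:
                raise

# Hypothetical table and a snippet an LLM might emit for the question
# "Which city has the highest population?"
table = {"city": ["Vigo", "Lugo"], "population": [292817, 98025]}
generated = (
    "idx = table['population'].index(max(table['population']))\n"
    "answer = table['city'][idx]"
)
print(run_generated_code(table, generated))  # -> Vigo
```

Keeping the generated snippet's namespace in a fresh `scope` dict per attempt avoids state leaking between retries.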
COGUMELO at SemEval-2025 Task 3: A Synthetic Approach to Detecting Hallucinations in Language Models based on Named Entity Recognition
Aldan Creo | Héctor Cerezo-Costas | Maximiliano Hormazábal Lagos | Pedro Alonso Doval
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
In this paper, we propose an approach to detecting hallucinations based on a Named Entity Recognition (NER) task. We focus on efficiency, aiming to develop a model that can detect hallucinations without relying on external data sources or expensive computations that involve state-of-the-art large language models with upwards of tens of billions of parameters. We utilize the SQuAD question answering dataset to generate a synthetic version that contains both correct and hallucinated responses, and train encoder language models of a moderate size (RoBERTa and FLAN-T5) to predict spans of text that are highly likely to contain a hallucination. We test our models on a separate dataset of expert-annotated question-answer pairs and find that our approach achieves a Jaccard similarity of up to 0.358 and a Spearman correlation of 0.227, which suggests that our models can serve as moderately accurate hallucination detectors, ideally as part of a detection pipeline involving human supervision. We also observe that larger models seem to develop an emergent ability to leverage their background knowledge to make more informed decisions, while smaller models seem to take shortcuts that can lead to a higher number of false positives. We make our data and code publicly accessible, along with an online visualizer. We also release our trained models under an open license.