Luca Gioacchini


2025

BitsAndBites at SemEval-2025 Task 9: Improving Food Hazard Detection with Sequential Multitask Learning and Large Language Models
Aurora Gensale | Irene Benedetto | Luca Gioacchini | Luca Cagliero | Alessio Bosca
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)

Automatic and early detection of foodborne hazards is crucial for preventing outbreaks. Existing AI-based solutions often struggle with the complexity and noise of food recall reports and fail to capture the dependency between product and hazard labels. To address these challenges, we introduce a methodology for classifying reports of food-related incidents. Our approach leverages LLM-based information extraction to minimize report variability, alongside a two-stage classification pipeline. The first model assigns coarse-grained labels, narrowing the space of eligible fine-grained labels for the second model. This sequential process allows us to capture the hierarchical dependencies between product and hazard labels and their respective categories. Additionally, we design each model with two classification heads, leveraging the inherent relations between food products and their associated hazards. We validate our approach on two multi-label classification sub-tasks. Experimental results demonstrate the effectiveness of our approach, with improvements of +30% and +40% in classification performance over the baseline.
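
The two-stage, dual-head design described above can be illustrated with a short sketch. This is a minimal illustration, not the authors' implementation: label counts are assumptions, a random feature vector stands in for the LLM-extracted report representation, and the coarse model's category predictions simply mask the fine-grained label space of a second model with the same dual-head structure.

# Illustrative sketch (assumed shapes and label counts, not the paper's code):
# a coarse dual-head model predicts product and hazard categories, which then
# restrict the fine-grained label space of a second dual-head model.
import torch
import torch.nn as nn

class DualHeadClassifier(nn.Module):
    """A shared report representation feeds two heads: products and hazards."""
    def __init__(self, hidden_dim, n_product_labels, n_hazard_labels):
        super().__init__()
        self.product_head = nn.Linear(hidden_dim, n_product_labels)
        self.hazard_head = nn.Linear(hidden_dim, n_hazard_labels)

    def forward(self, encoded_report):
        return self.product_head(encoded_report), self.hazard_head(encoded_report)

def mask_fine_logits(fine_logits, coarse_pred, fine_to_coarse):
    """Keep only fine-grained labels whose parent category matches the coarse prediction."""
    allowed = fine_to_coarse == coarse_pred.unsqueeze(1)   # (batch, n_fine)
    return fine_logits.masked_fill(~allowed, float("-inf"))

# Toy usage with random features standing in for LLM-extracted report embeddings.
hidden, n_prod_cat, n_haz_cat, n_prod_fine, n_haz_fine = 32, 4, 3, 12, 9
coarse_model = DualHeadClassifier(hidden, n_prod_cat, n_haz_cat)
fine_model = DualHeadClassifier(hidden, n_prod_fine, n_haz_fine)

# Assumed mapping from each fine-grained label to its coarse category.
prod_fine_to_coarse = torch.randint(0, n_prod_cat, (n_prod_fine,))
haz_fine_to_coarse = torch.randint(0, n_haz_cat, (n_haz_fine,))

x = torch.randn(5, hidden)                                 # batch of 5 reports
prod_cat_logits, haz_cat_logits = coarse_model(x)          # stage 1: coarse labels
prod_fine_logits, haz_fine_logits = fine_model(x)          # stage 2: fine labels

prod_fine_logits = mask_fine_logits(prod_fine_logits, prod_cat_logits.argmax(-1), prod_fine_to_coarse)
haz_fine_logits = mask_fine_logits(haz_fine_logits, haz_cat_logits.argmax(-1), haz_fine_to_coarse)
print(prod_fine_logits.argmax(-1), haz_fine_logits.argmax(-1))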

2024

AgentQuest: A Modular Benchmark Framework to Measure Progress and Improve LLM Agents
Luca Gioacchini | Giuseppe Siracusano | Davide Sanvito | Kiril Gashteovski | David Friede | Roberto Bifulco | Carolin Lawrence
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: System Demonstrations)

The advances made by Large Language Models (LLMs) have led to the pursuit of LLM agents that can solve intricate, multi-step reasoning tasks. As with any research pursuit, benchmarking and evaluation are key cornerstones of efficient and reliable progress. However, existing benchmarks are often narrow and simply compute overall task success. To address these issues, we propose AgentQuest, a framework where (i) both benchmarks and metrics are modular and easily extensible through well-documented, easy-to-use APIs; and (ii) we offer two new evaluation metrics that can reliably track LLM agent progress while solving a task. We exemplify the utility of these metrics on two use cases, in which we identify common failure points and refine the agent architecture to obtain a significant performance increase. Together with the research community, we hope to extend AgentQuest further, and we therefore make it available at https://github.com/nec-research/agentquest.
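
As a rough illustration of what a modular benchmark-plus-metric interface can look like, the sketch below defines pluggable Benchmark and Metric base classes and a toy trajectory-level metric. All class and method names here are assumptions made for illustration; they are not the actual AgentQuest API (see the linked repository for the real interfaces).

# Hypothetical sketch of a modular benchmark/metric interface; NOT the
# AgentQuest API. New benchmarks or metrics would plug in by subclassing.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class StepRecord:
    """One agent action and the resulting environment observation."""
    action: str
    observation: str

class Benchmark(ABC):
    """Pluggable task definition: new benchmarks implement reset/step/is_solved."""
    @abstractmethod
    def reset(self) -> str: ...
    @abstractmethod
    def step(self, action: str) -> StepRecord: ...
    @abstractmethod
    def is_solved(self) -> bool: ...

class Metric(ABC):
    """Pluggable metric computed over a full trajectory, so progress can be
    tracked beyond binary task success."""
    @abstractmethod
    def compute(self, trajectory: list[StepRecord]) -> float: ...

class RepetitionRate(Metric):
    """Toy example metric: fraction of repeated actions, a rough proxy for looping."""
    def compute(self, trajectory: list[StepRecord]) -> float:
        if not trajectory:
            return 0.0
        actions = [s.action for s in trajectory]
        return 1.0 - len(set(actions)) / len(actions)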