Korbinian Randl


2025

SemEval-2025 Task 9: The Food Hazard Detection Challenge
Korbinian Randl | John Pavlopoulos | Aron Henriksson | Tony Lindgren | Juli Bakagianni
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)

In this challenge, we explored text-based food hazard prediction with long-tail distributed classes. The task was divided into two subtasks: (1) predicting whether a web text implies one of ten food-hazard categories and identifying the associated food category, and (2) providing a more fine-grained classification by assigning a specific label to both the hazard and the product. Our findings highlight that large language model-generated synthetic data can be highly effective for oversampling long-tail distributions. Furthermore, we find that fine-tuned encoder-only, encoder-decoder, and decoder-only systems achieve comparable maximum performance across both subtasks. During this challenge, we are gradually releasing (under CC BY-NC-SA 4.0) a novel set of 6,644 manually labeled food-incident reports.
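The oversampling finding can be illustrated with a minimal sketch. The helper below, including the names oversample_with_synthetic, generate_fn, and target_per_class, is an illustrative assumption rather than the organisers' actual pipeline: it counts class frequencies and asks an LLM, via a caller-supplied generate_fn, to write additional reports for tail classes until each class reaches a target size.

```python
from collections import Counter
import random

def oversample_with_synthetic(texts, labels, generate_fn, target_per_class=None):
    """Balance a long-tailed text dataset by adding LLM-generated examples.

    generate_fn(label, seed_texts) is a hypothetical callable that asks an
    LLM to write a new report for class `label`, similar to `seed_texts`.
    """
    counts = Counter(labels)
    if target_per_class is None:
        target_per_class = max(counts.values())  # match the head class

    by_class = {}
    for text, label in zip(texts, labels):
        by_class.setdefault(label, []).append(text)

    aug_texts, aug_labels = list(texts), list(labels)
    for label, examples in by_class.items():
        # Generate only for classes below the target size.
        for _ in range(target_per_class - counts[label]):
            seeds = random.sample(examples, k=min(3, len(examples)))
            aug_texts.append(generate_fn(label, seeds))
            aug_labels.append(label)
    return aug_texts, aug_labels
```

Keeping the generation step behind a plain callable leaves the sketch independent of any particular LLM API or prompt format.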

2024

CICLe: Conformal In-Context Learning for Largescale Multi-Class Food Risk Classification
Korbinian Randl | John Pavlopoulos | Aron Henriksson | Tony Lindgren
Findings of the Association for Computational Linguistics: ACL 2024

Contaminated or adulterated food poses a substantial risk to human health. Given sets of labeled web texts for training, Machine Learning and Natural Language Processing can be applied to automatically detect such risks. We publish a dataset of 7,546 short texts describing public food recall announcements. Each text is manually labeled, on two granularity levels (coarse and fine), for food products and hazards that the recall corresponds to. We describe the dataset and benchmark naive, traditional, and Transformer models. Based on our analysis, Logistic Regression based on a TF-IDF representation outperforms RoBERTa and XLM-R on classes with low support. Finally, we discuss different prompting strategies and present an LLM-in-the-loop framework, based on Conformal Prediction, which boosts the performance of the base classifier while reducing energy consumption compared to normal prompting.
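As a rough illustration of the two findings, the sketch below pairs a TF-IDF Logistic Regression base classifier with split conformal prediction and consults an LLM only when the conformal prediction set contains more than one label. The llm_pick callable, the variable names, the coverage level alpha=0.1, and the specific nonconformity score (one minus the predicted probability of the true label) are assumptions for illustration, not the paper's exact CICLe implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def fit_base_classifier(train_texts, train_labels):
    """TF-IDF features plus Logistic Regression as the cheap base classifier."""
    vec = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(train_texts), train_labels)
    return vec, clf

def conformal_threshold(vec, clf, calib_texts, calib_labels, alpha=0.1):
    """Split conformal calibration; nonconformity = 1 - P(true label)."""
    probs = clf.predict_proba(vec.transform(calib_texts))
    col = {c: i for i, c in enumerate(clf.classes_)}
    true_idx = np.array([col[y] for y in calib_labels])
    scores = 1.0 - probs[np.arange(len(true_idx)), true_idx]
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level, method="higher")

def classify(text, vec, clf, qhat, llm_pick):
    """Return a label; fall back to the LLM only for ambiguous prediction sets."""
    probs = clf.predict_proba(vec.transform([text]))[0]
    candidates = [c for c, p in zip(clf.classes_, probs) if 1.0 - p <= qhat]
    if len(candidates) == 1:
        return candidates[0]
    if not candidates:
        return clf.classes_[int(np.argmax(probs))]
    # Only ambiguous cases reach the LLM, with the reduced label set in the
    # prompt; llm_pick(text, candidates) is a hypothetical LLM wrapper.
    return llm_pick(text, candidates)
```

Because most examples yield a singleton prediction set, only the ambiguous remainder is sent to the LLM, and each prompt lists only a handful of candidate labels, which is how a framework of this kind can cut prompting cost relative to prompting on every example with the full label set.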