Alessandro Sperduti


2026

High-quality annotated data is essential for training effective machine learning models, especially for fine-grained tasks like Named Entity Recognition (NER), where each token in a sentence must be assigned a gold-standard label. While Large Language Models (LLMs) show strong potential for automating data annotation, the existing literature lacks extensive evaluations that systematically compare different models, embedding strategies, and context selection methods, particularly on complex, real-world datasets. This paper fills this gap with a comprehensive study of LLMs for NER annotation across four diverse datasets. It benchmarks both proprietary and open-source LLMs at the 7B to 70B parameter scale, including a 32B reasoning-optimized model, and explores multiple context selection strategies. Two evaluations are performed: (i) an assessment of the practical utility of LLM-generated annotations, obtained by fine-tuning a RoBERTa model on them and measuring downstream performance; and (ii) a direct assessment of the LLM-generated annotations themselves, using token-level metrics such as Precision, Recall, and F1, together with agreement with human annotations (Cohen's κ). Empirical results, supported by statistical tests, highlight the importance of choosing suitable LLMs and embedding models and reveal key trade-offs between model scale and annotation quality. Challenging datasets like SKILLSPAN further expose the limitations of current LLM-based annotation pipelines, emphasizing the need for benchmarking on difficult, real-world tasks.
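A minimal sketch of the token-level evaluation described in (ii), comparing LLM-produced tags against gold labels. The BIO tag names, the toy sequences, and the choice of micro-averaging are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch: token-level scoring of LLM annotations against gold labels.
from sklearn.metrics import cohen_kappa_score, precision_recall_fscore_support

# Hypothetical flattened tag sequences, one tag per token.
gold = ["B-SKILL", "I-SKILL", "O", "O", "B-SKILL", "O"]
llm  = ["B-SKILL", "O",       "O", "O", "B-SKILL", "O"]

# Micro-averaged Precision, Recall, F1 over all tags (an assumption here).
precision, recall, f1, _ = precision_recall_fscore_support(
    gold, llm, average="micro", zero_division=0
)
# Cohen's kappa measures agreement with the human (gold) annotations.
kappa = cohen_kappa_score(gold, llm)

print(f"P={precision:.2f} R={recall:.2f} F1={f1:.2f} kappa={kappa:.2f}")
```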

2024

Large Language Models (LLMs) have revolutionized the field of Natural Language Processing thanks to their ability to reuse knowledge acquired from massive text corpora across a wide variety of downstream tasks, with minimal (if any) tuning. At the same time, it has been repeatedly shown that LLMs lack systematic generalization, i.e., the ability to extrapolate learned statistical regularities outside the training distribution. In this work, we offer a systematic benchmarking of GPT-4, one of the most advanced LLMs available, on three algorithmic tasks whose difficulty can be controlled through two parameters. We compare the performance of GPT-4 with that of its predecessor (GPT-3.5) and with a variant of the Transformer-Encoder architecture recently introduced to solve similar tasks, the Neural Data Router. We find that advanced prompting techniques enable GPT-4 to reach superior accuracy on all tasks, demonstrating that state-of-the-art LLMs constitute a very strong baseline even on challenging tasks that require systematic generalization.
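A sketch of the kind of benchmark the abstract describes: an algorithmic task whose difficulty is controlled by two parameters, posed via a chain-of-thought-style prompt. The concrete task (nested addition, controlled by nesting depth and branching factor) and the prompt wording are illustrative assumptions, not the paper's actual tasks or prompts.

```python
# Sketch: a controllable algorithmic task plus an "advanced" prompt.
import random

def make_instance(depth: int, branching: int) -> str:
    """Generate a nested expression; depth and branching set the difficulty."""
    if depth == 0:
        return str(random.randint(0, 9))
    parts = [make_instance(depth - 1, branching) for _ in range(branching)]
    return "(" + " + ".join(parts) + ")"

def build_prompt(expression: str) -> str:
    """Chain-of-thought-style prompt, one plausible 'advanced prompting technique'."""
    return (
        "Evaluate the expression step by step, resolving the innermost "
        "parentheses first, and report the final result.\n"
        f"Expression: {expression}\n"
        "Let's think step by step."
    )

print(build_prompt(make_instance(depth=2, branching=3)))
```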