Benchmarking GPT-4 on Algorithmic Problems: A Systematic Evaluation of Prompting Strategies

Flavio Petruzzellis, Alberto Testolin, Alessandro Sperduti


Abstract
Large Language Models (LLMs) have revolutionized the field of Natural Language Processing thanks to their ability to reuse knowledge acquired on massive text corpora across a wide variety of downstream tasks, with minimal (if any) tuning steps. At the same time, it has been repeatedly shown that LLMs lack systematic generalization, i.e., the capacity to extrapolate learned statistical regularities outside the training distribution. In this work, we offer a systematic benchmarking of GPT-4, one of the most advanced LLMs available, on three algorithmic tasks whose difficulty can be controlled by two parameters. We compare the performance of GPT-4 with that of its predecessor (GPT-3.5) and with a variant of the Transformer-Encoder architecture recently introduced to solve similar tasks, the Neural Data Router. We find that the deployment of advanced prompting techniques allows GPT-4 to reach superior accuracy on all tasks, demonstrating that state-of-the-art LLMs constitute a very strong baseline even on challenging tasks that require systematic generalization.
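The abstract does not specify the three tasks. As a purely illustrative sketch of what "controlling problem difficulty with two parameters" can look like, the hypothetical Python generator below builds nested addition problems whose difficulty grows with nesting depth and operand count; the task choice, function name, and parameter names are assumptions for illustration, not the paper's actual benchmark.

    import random

    def make_problem(depth: int, width: int) -> tuple[str, int]:
        """Hypothetical generator for nested addition problems.

        Difficulty is controlled by two parameters (illustrative
        stand-ins for the paper's own difficulty knobs):
          - depth: how deeply sub-expressions are nested
          - width: how many operands appear at each nesting level
        Returns the expression string and its ground-truth value.
        """
        if depth == 0:
            n = random.randint(0, 9)  # leaf: a single digit
            return str(n), n
        subexprs = [make_problem(depth - 1, width) for _ in range(width)]
        expr = "(" + " + ".join(e for e, _ in subexprs) + ")"
        return expr, sum(v for _, v in subexprs)

    # Sweep the two difficulty parameters to build an evaluation grid:
    random.seed(0)
    for depth in (1, 2, 3):
        for width in (2, 3):
            expr, answer = make_problem(depth, width)
            print(f"depth={depth}, width={width}: {expr} = {answer}")

Sweeping such a grid makes it possible to measure how accuracy degrades as each parameter grows, which is the kind of controlled extrapolation test the abstract describes.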
Anthology ID:
2024.lrec-main.195
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
2161–2177
URL:
https://aclanthology.org/2024.lrec-main.195
Cite (ACL):
Flavio Petruzzellis, Alberto Testolin, and Alessandro Sperduti. 2024. Benchmarking GPT-4 on Algorithmic Problems: A Systematic Evaluation of Prompting Strategies. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 2161–2177, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Benchmarking GPT-4 on Algorithmic Problems: A Systematic Evaluation of Prompting Strategies (Petruzzellis et al., LREC-COLING 2024)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2024.lrec-main.195.pdf