Overview of the Critical Questions Generation Shared Task

Blanca Calvo Figueras, Rodrigo Agerri, Maite Heredia, Jaione Bengoetxea, Elena Cabrio, Serena Villata
Abstract
The proliferation of AI technologies has reinforced the importance of developing critical thinking skills. We propose leveraging Large Language Models (LLMs) to facilitate the generation of critical questions: inquiries designed to identify fallacious or inadequately constructed arguments. This paper presents an overview of the first shared task on Critical Questions Generation (CQs-Gen). Thirteen teams investigated various methodologies for generating questions that critically assess arguments within the provided texts. The highest accuracy achieved was 67.6, indicating substantial room for improvement in this task. Moreover, three of the four top-performing teams incorporated argumentation scheme annotations to enhance their systems. Finally, while most participants employed open-weight models, the two highest-ranking teams relied on proprietary LLMs.
Anthology ID:
2025.argmining-1.23
Volume:
Proceedings of the 12th Argument Mining Workshop
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Elena Chistova, Philipp Cimiano, Shohreh Haddadan, Gabriella Lapesa, Ramon Ruiz-Dolz
Venues:
ArgMining | WS
Publisher:
Association for Computational Linguistics
Pages:
243–257
URL:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.argmining-1.23/
DOI:
10.18653/v1/2025.argmining-1.23
Cite (ACL):
Blanca Calvo Figueras, Rodrigo Agerri, Maite Heredia, Jaione Bengoetxea, Elena Cabrio, and Serena Villata. 2025. Overview of the Critical Questions Generation Shared Task. In Proceedings of the 12th Argument Mining Workshop, pages 243–257, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Overview of the Critical Questions Generation Shared Task (Calvo Figueras et al., ArgMining 2025)
PDF:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.argmining-1.23.pdf