ELLIS Alicante at CQs-Gen 2025: Winning the critical thinking questions shared task: LLM-based question generation and selection

Lucile Favero, Daniel Frases, Juan Antonio Pérez-Ortiz, Tanja Käser


Abstract
The widespread adoption of chat interfaces based on Large Language Models (LLMs) raises concerns about promoting superficial learning and undermining the development of critical thinking skills. Instead of relying on LLMs purely for retrieving factual information, this work explores their potential to foster deeper reasoning by generating critical questions that challenge unsupported or vague claims in debate interventions. The study was conducted as part of the shared task on automatic critical question generation at the 12th Workshop on Argument Mining, co-located with ACL 2025. We propose a two-step framework involving two small-scale open-source language models: a Questioner that generates multiple candidate questions and a Judge that selects the most relevant ones. Our system ranked first in the shared task competition, demonstrating the potential of the proposed LLM-based approach to encourage critical engagement with argumentative texts.
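The two-step Questioner/Judge pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the prompts, scoring scale, number of candidates, and the generic `LLMCall` wrapper are all assumptions introduced here for clarity.

```python
from typing import Callable, List, Tuple

# Hypothetical sketch of the two-step framework: a "Questioner" model proposes
# candidate critical questions for a debate intervention, and a "Judge" model
# scores them so the top-ranked ones can be kept. Prompts and scoring are assumed.

LLMCall = Callable[[str], str]  # wraps any chat/completion model: prompt -> text


def generate_candidates(questioner: LLMCall, intervention: str, n: int = 5) -> List[str]:
    """Ask the Questioner model for n candidate critical questions."""
    prompt = (
        "Read the following debate intervention and write one critical question "
        "that challenges an unsupported or vague claim in it.\n\n"
        f"Intervention:\n{intervention}\n\nCritical question:"
    )
    # Sampling the model n times yields diverse candidates.
    return [questioner(prompt).strip() for _ in range(n)]


def judge_candidates(judge: LLMCall, intervention: str,
                     candidates: List[str]) -> List[Tuple[float, str]]:
    """Have the Judge model rate each candidate from 1 (poor) to 5 (excellent)."""
    scored = []
    for question in candidates:
        prompt = (
            "Rate how useful the following critical question is for challenging "
            "the argument in the intervention. Answer with a single number 1-5.\n\n"
            f"Intervention:\n{intervention}\n\nQuestion:\n{question}\n\nScore:"
        )
        reply = judge(prompt)
        try:
            score = float(reply.strip().split()[0])
        except (ValueError, IndexError):
            score = 0.0  # unparsable replies are ranked last
        scored.append((score, question))
    return sorted(scored, reverse=True)


def select_critical_questions(questioner: LLMCall, judge: LLMCall,
                              intervention: str, k: int = 3) -> List[str]:
    """Full pipeline: generate candidates, judge them, keep the top k."""
    candidates = generate_candidates(questioner, intervention)
    ranked = judge_candidates(judge, intervention, candidates)
    return [question for _, question in ranked[:k]]
```

Any two instruction-tuned models exposed as `prompt -> text` callables can be plugged in for the Questioner and Judge roles; the paper itself reports using small-scale open-source models for both.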
Anthology ID:
2025.argmining-1.31
Volume:
Proceedings of the 12th Argument mining Workshop
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Elena Chistova, Philipp Cimiano, Shohreh Haddadan, Gabriella Lapesa, Ramon Ruiz-Dolz
Venues:
ArgMining | WS
Publisher:
Association for Computational Linguistics
Pages:
322–331
URL:
https://preview.aclanthology.org/landing_page/2025.argmining-1.31/
DOI:
10.18653/v1/2025.argmining-1.31
Cite (ACL):
Lucile Favero, Daniel Frases, Juan Antonio Pérez-Ortiz, and Tanja Käser. 2025. ELLIS Alicante at CQs-Gen 2025: Winning the critical thinking questions shared task: LLM-based question generation and selection. In Proceedings of the 12th Argument mining Workshop, pages 322–331, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
ELLIS Alicante at CQs-Gen 2025: Winning the critical thinking questions shared task: LLM-based question generation and selection (Favero et al., ArgMining 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.argmining-1.31.pdf