Tdnguyen at CQs-Gen 2025: Adapt Large Language Models with Multi-Step Reasoning for Critical Questions Generation

Tien-Dat Nguyen, Duc-Vu Nguyen


Abstract
This paper explores the generation of Critical Questions (CQs) from argumentative texts using multi-step reasoning techniques, specifically the Chain-of-Thought (CoT) and Tree-of-Thoughts (ToT) prompting frameworks. CQs are essential for enhancing critical thinking and improving decision-making across various domains. Despite the promise of Large Language Models (LLMs) on this task, generating contextually relevant and logically sound questions remains a challenge. Our experiments show that CoT-based prompting strategies, including Zero-shot and One-shot methods, significantly outperform baseline models in generating high-quality CQs. While ToT prompting offers a more flexible reasoning structure, it proved less effective than CoT on this task. We suggest exploring more advanced or computationally intensive multi-step reasoning techniques, as well as alternative tree structures for the ToT framework, to further improve CQs-Gen systems.
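As a rough illustration of the zero-shot CoT setup the abstract describes, the sketch below assembles a step-by-step prompt for generating critical questions from an argumentative text. It is a minimal sketch only: the prompt wording, the build_zero_shot_cot_prompt helper, and the call_llm placeholder are hypothetical stand-ins, not the authors' actual pipeline.

# Minimal sketch of a zero-shot Chain-of-Thought prompt for critical-question
# generation, assuming a generic text-completion interface.

def build_zero_shot_cot_prompt(argument_text: str, n_questions: int = 3) -> str:
    """Assemble a prompt that asks the model to reason step by step
    before writing critical questions about the argument."""
    return (
        "You are given an argumentative text.\n"
        f"Text: {argument_text}\n\n"
        "Let's think step by step: first identify the claims and the "
        "evidence offered for them, then note any unstated assumptions, "
        f"and finally write {n_questions} critical questions that probe "
        "whether the argument holds.\n"
        "Critical questions:"
    )

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for whatever chat/completion client is used.
    raise NotImplementedError("wire this to an LLM client")

if __name__ == "__main__":
    prompt = build_zero_shot_cot_prompt(
        "We should ban e-scooters because a rider was injured downtown last week."
    )
    print(prompt)  # inspect the assembled CoT prompt

A one-shot variant of the same idea would simply prepend a single worked example (text, reasoning, questions) before the target text.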
Anthology ID: 2025.argmining-1.25
Volume: Proceedings of the 12th Argument Mining Workshop
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Elena Chistova, Philipp Cimiano, Shohreh Haddadan, Gabriella Lapesa, Ramon Ruiz-Dolz
Venues: ArgMining | WS
Publisher: Association for Computational Linguistics
Pages: 265–280
URL: https://preview.aclanthology.org/landing_page/2025.argmining-1.25/
DOI: 10.18653/v1/2025.argmining-1.25
Cite (ACL): Tien-Dat Nguyen and Duc-Vu Nguyen. 2025. Tdnguyen at CQs-Gen 2025: Adapt Large Language Models with Multi-Step Reasoning for Critical Questions Generation. In Proceedings of the 12th Argument Mining Workshop, pages 265–280, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Tdnguyen at CQs-Gen 2025: Adapt Large Language Models with Multi-Step Reasoning for Critical Questions Generation (Nguyen & Nguyen, ArgMining 2025)
PDF: https://preview.aclanthology.org/landing_page/2025.argmining-1.25.pdf