Does “Reasoning” with Large Language Models Improve Recognizing, Generating and Reframing Unhelpful Thoughts?

Yilin Qi, Dong Won Lee, Cynthia Breazeal, Hae Won Park


Abstract
Cognitive Reframing, a core element of Cognitive Behavioral Therapy (CBT), helps individuals reinterpret negative experiences by finding positive meaning. Recent advances in Large Language Models (LLMs) have demonstrated improved performance through reasoning-based strategies. This suggests a promising direction: leveraging the reasoning capabilities of LLMs to improve CBT and cognitive reframing by simulating the process of critical thinking, potentially enabling more effective recognition, generation, and reframing of cognitive distortions. In this work, we investigate the role of various reasoning methods, including pre-trained reasoning LLMs, such as DeepSeek-R1, and augmented reasoning strategies, such as CoT (Wei et al., 2022) and self-consistency (Wang et al., 2022), in enhancing LLMs’ ability to perform cognitive reframing tasks. We find that augmented reasoning methods, even when applied to older LLMs like GPT-3.5, consistently outperform state-of-the-art pretrained reasoning models such as DeepSeek-R1 (Guo et al., 2025) and o1 (Jaech et al., 2024) on recognizing, generating and reframing unhelpful thoughts.
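To make the augmented-reasoning setup concrete, below is a minimal illustrative sketch (not the paper's actual implementation) of chain-of-thought prompting combined with self-consistency voting for recognizing a cognitive distortion in an unhelpful thought. The prompt wording, the distortion labels, the model name, and the use of the OpenAI chat client are assumptions made for illustration; any chat-completion API could be substituted.

```python
from collections import Counter
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment; any chat-completion client would do.
client = OpenAI()

# Illustrative CoT prompt; the actual prompts and label set used in the paper may differ.
COT_PROMPT = (
    "A person wrote the following thought:\n"
    '"{thought}"\n\n'
    "Think step by step about which cognitive distortion, if any, this thought shows "
    "(e.g. catastrophizing, mind reading, all-or-nothing thinking). "
    "End with a line of the form 'Answer: <distortion>'."
)

def classify_distortion(thought: str, n_samples: int = 5, model: str = "gpt-3.5-turbo") -> str:
    """Chain-of-thought prompting with self-consistency: sample several reasoning
    paths at non-zero temperature and return the majority-vote label."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": COT_PROMPT.format(thought=thought)}],
        temperature=0.7,   # non-zero temperature yields diverse reasoning paths
        n=n_samples,       # one completion per sampled reasoning path
    )
    # Extract the label after the final 'Answer:' marker in each sampled completion.
    votes = [
        choice.message.content.rsplit("Answer:", 1)[-1].strip().lower()
        for choice in response.choices
    ]
    label, _count = Counter(votes).most_common(1)[0]
    return label

# Example usage:
# print(classify_distortion("I failed this exam, so I will fail at everything I try."))
```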
Anthology ID:
2025.nlp4pi-1.5
Volume:
Proceedings of the Fourth Workshop on NLP for Positive Impact (NLP4PI)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Katherine Atwell, Laura Biester, Angana Borah, Daryna Dementieva, Oana Ignat, Neema Kotonya, Ziyi Liu, Ruyuan Wan, Steven Wilson, Jieyu Zhao
Venues:
NLP4PI | WS
Publisher:
Association for Computational Linguistics
Pages:
62–69
URL:
https://preview.aclanthology.org/display_plenaries/2025.nlp4pi-1.5/
Cite (ACL):
Yilin Qi, Dong Won Lee, Cynthia Breazeal, and Hae Won Park. 2025. Does “Reasoning” with Large Language Models Improve Recognizing, Generating and Reframing Unhelpful Thoughts?. In Proceedings of the Fourth Workshop on NLP for Positive Impact (NLP4PI), pages 62–69, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Does “Reasoning” with Large Language Models Improve Recognizing, Generating and Reframing Unhelpful Thoughts? (Qi et al., NLP4PI 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.nlp4pi-1.5.pdf