Unlocking LLMs: Addressing Scarce Data and Bias Challenges in Mental Health and Therapeutic Counselling
Vivek Kumar, Pushpraj Singh Rajwat, Giacomo Medda, Eirini Ntoutsi, Diego Reforgiato Recupero
Abstract
Large language models (LLMs) have shown promising capabilities in healthcare analysis but face several challenges, such as hallucinations, parroting, and bias manifestation. These challenges are exacerbated in complex, sensitive, and low-resource domains. Therefore, in this work, we introduce IC-AnnoMI, an expert-annotated motivational interviewing (MI) dataset built upon AnnoMI by generating in-context conversational dialogues with LLMs, particularly ChatGPT. IC-AnnoMI employs targeted prompts carefully engineered through cues and tailored information, taking into account therapy style (empathy, reflection), contextual relevance, and false semantic change. Subsequently, the dialogues are annotated by experts, strictly adhering to the Motivational Interviewing Skills Code (MISC), focusing on both the psychological and linguistic dimensions of MI dialogues. We comprehensively evaluate the IC-AnnoMI dataset and ChatGPT’s emotional reasoning ability and understanding of domain intricacies by modeling novel classification tasks employing several classical machine learning and current state-of-the-art transformer approaches. Finally, we discuss the effects of progressive prompting strategies and the impact of augmented data in mitigating the biases manifested in IC-AnnoMI. Our contributions provide the MI community with not only a comprehensive dataset but also valuable insights for using LLMs in empathetic text generation for conversational therapy in supervised settings.
- Anthology ID:
- 2024.nlpaics-1.26
- Volume:
- Proceedings of the First International Conference on Natural Language Processing and Artificial Intelligence for Cyber Security
- Month:
- July
- Year:
- 2024
- Address:
- Lancaster, UK
- Editors:
- Ruslan Mitkov, Saad Ezzini, Tharindu Ranasinghe, Ignatius Ezeani, Nouran Khallaf, Cengiz Acarturk, Matthew Bradbury, Mo El-Haj, Paul Rayson
- Venue:
- NLPAICS
- Publisher:
- International Conference on Natural Language Processing and Artificial Intelligence for Cyber Security
- Pages:
- 238–251
- URL:
- https://preview.aclanthology.org/fix-sig-urls/2024.nlpaics-1.26/
- Cite (ACL):
- Vivek Kumar, Pushpraj Singh Rajwat, Giacomo Medda, Eirini Ntoutsi, and Diego Reforgiato Recupero. 2024. Unlocking LLMs: Addressing Scarce Data and Bias Challenges in Mental Health and Therapeutic Counselling. In Proceedings of the First International Conference on Natural Language Processing and Artificial Intelligence for Cyber Security, pages 238–251, Lancaster, UK. International Conference on Natural Language Processing and Artificial Intelligence for Cyber Security.
- Cite (Informal):
- Unlocking LLMs: Addressing Scarce Data and Bias Challenges in Mental Health and Therapeutic Counselling (Kumar et al., NLPAICS 2024)
- PDF:
- https://preview.aclanthology.org/fix-sig-urls/2024.nlpaics-1.26.pdf