Guardians of Trust: Risks and Opportunities for LLMs in Mental Health

Miguel Baidal, Erik Derner, Nuria Oliver


Abstract
The integration of large language models (LLMs) into mental health applications offers promising opportunities for positive social impact. However, it also presents critical risks. While previous studies have often addressed these challenges and risks individually, a broader and multi-dimensional approach is still lacking. In this paper, we introduce a taxonomy of the main challenges related to the use of LLMs for mental health and propose a structured, comprehensive research agenda to mitigate them. We emphasize the need for explainable, emotionally aware, culturally sensitive, and clinically aligned systems, supported by continuous monitoring and human oversight. By placing our work within the broader context of natural language processing (NLP) for positive impact, this research contributes to ongoing efforts to ensure that technological advances in NLP responsibly serve vulnerable populations, fostering a future where mental health solutions improve rather than endanger well-being.
Anthology ID:
2025.nlp4pi-1.2
Volume:
Proceedings of the Fourth Workshop on NLP for Positive Impact (NLP4PI)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Katherine Atwell, Laura Biester, Angana Borah, Daryna Dementieva, Oana Ignat, Neema Kotonya, Ziyi Liu, Ruyuan Wan, Steven Wilson, Jieyu Zhao
Venues:
NLP4PI | WS
Publisher:
Association for Computational Linguistics
Pages:
11–22
URL:
https://preview.aclanthology.org/landing_page/2025.nlp4pi-1.2/
DOI:
10.18653/v1/2025.nlp4pi-1.2
Cite (ACL):
Miguel Baidal, Erik Derner, and Nuria Oliver. 2025. Guardians of Trust: Risks and Opportunities for LLMs in Mental Health. In Proceedings of the Fourth Workshop on NLP for Positive Impact (NLP4PI), pages 11–22, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Guardians of Trust: Risks and Opportunities for LLMs in Mental Health (Baidal et al., NLP4PI 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.nlp4pi-1.2.pdf