Using Large Language Models (LLMs) to Extract Evidence from Pre-Annotated Social Media Data

Falwah Alhamed, Julia Ive, Lucia Specia


Abstract
For many years, researchers have used social media data to gain insights into users’ mental health. However, most studies focus on classifying users as depressed or healthy, or on detecting suicidal thoughts. In this paper, we instead aim to extract evidence for a pre-assigned gold label. We use a suicidality dataset of Reddit posts labeled with suicide risk level; the task is to use Large Language Models (LLMs) to extract the evidence in each post that justifies its label. We solve the task with Meta Llama 7B and lexicons, achieving a precision of 0.96.
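The page does not include code; as a rough illustration only, the sketch below shows one way such evidence extraction could be prompted with a 7B Llama chat model through the Hugging Face transformers pipeline. The model checkpoint, prompt wording, and risk label are assumptions for the example, not the authors' released pipeline.

```python
# Illustrative sketch only: prompting a Llama-family model to quote evidence
# for a pre-assigned suicide-risk label. Checkpoint, prompt, and label names
# are assumptions for this example, not taken from the paper.
from transformers import pipeline

# Hypothetical checkpoint; any instruction-tuned Llama 7B chat model could be used.
generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")

def extract_evidence(post: str, risk_label: str) -> str:
    """Ask the model to copy verbatim the sentences that justify the given label."""
    prompt = (
        "The following Reddit post has been annotated with the suicide risk "
        f"level '{risk_label}'. Quote, word for word, the sentences from the "
        "post that justify this label. Do not add any new text.\n\n"
        f"Post: {post}\n\nEvidence:"
    )
    output = generator(prompt, max_new_tokens=128, do_sample=False)
    # The pipeline returns the prompt plus the continuation; keep only the continuation.
    return output[0]["generated_text"][len(prompt):].strip()

# Example call with a placeholder post and a hypothetical label:
# evidence = extract_evidence(reddit_post, "moderate")
```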
Anthology ID: 2024.clpsych-1.22
Volume: Proceedings of the 9th Workshop on Computational Linguistics and Clinical Psychology (CLPsych 2024)
Month: March
Year: 2024
Address: St. Julians, Malta
Editors: Andrew Yates, Bart Desmet, Emily Prud’hommeaux, Ayah Zirikly, Steven Bedrick, Sean MacAvaney, Kfir Bar, Molly Ireland, Yaakov Ophir
Venues: CLPsych | WS
Publisher: Association for Computational Linguistics
Pages: 232–237
URL: https://aclanthology.org/2024.clpsych-1.22
Cite (ACL): Falwah Alhamed, Julia Ive, and Lucia Specia. 2024. Using Large Language Models (LLMs) to Extract Evidence from Pre-Annotated Social Media Data. In Proceedings of the 9th Workshop on Computational Linguistics and Clinical Psychology (CLPsych 2024), pages 232–237, St. Julians, Malta. Association for Computational Linguistics.
Cite (Informal): Using Large Language Models (LLMs) to Extract Evidence from Pre-Annotated Social Media Data (Alhamed et al., CLPsych-WS 2024)
PDF: https://preview.aclanthology.org/emnlp-22-attachments/2024.clpsych-1.22.pdf