Assessing the Reliability and Validity of GPT-4 in Annotating Emotion Appraisal Ratings

Deniss Ruder, Andero Uusberg, Kairit Sirts


Abstract
Appraisal theories suggest that emotions arise from subjective evaluations of events, referred to as appraisals. Appraisal taxonomies are diverse, and appraisals are typically rated on Likert scales, annotated in either an experiencer-annotator or a reader-annotator paradigm. This paper studies GPT-4 as a reader-annotator of 21 specific appraisal ratings under different prompt settings, aiming to evaluate and improve its performance relative to human annotators. We found that GPT-4 is an effective reader-annotator that performs close to, or even slightly better than, human annotators, and that its results can be significantly improved by majority voting over five completions. GPT-4 also effectively predicts appraisal ratings and emotion labels in a single prompt, but added instruction complexity degrades performance. We also found that longer event descriptions lead to more accurate annotations for both model and human ratings. This work contributes to the growing use of LLMs in psychology and to strategies for improving GPT-4's performance in annotating appraisals.
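The majority-voting aggregation mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes five Likert ratings (integers) have already been collected from separate GPT-4 completions for one appraisal item, and simply picks the most frequent value, breaking ties deterministically by taking the middle of the tied values.

```python
from collections import Counter

def majority_vote(ratings):
    """Return the most frequent Likert rating among completions.

    Ties are broken by taking the median of the tied values,
    which keeps the result deterministic.
    """
    counts = Counter(ratings)
    top = max(counts.values())
    tied = sorted(r for r, c in counts.items() if c == top)
    return tied[len(tied) // 2]

# Example: five completions rating one appraisal item on a 1-5 scale
print(majority_vote([4, 4, 3, 4, 5]))  # -> 4
```

In practice each rating would come from an independent sampled completion of the same prompt; only the aggregation step is shown here.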
Anthology ID:
2025.clpsych-1.1
Volume:
Proceedings of the 10th Workshop on Computational Linguistics and Clinical Psychology (CLPsych 2025)
Month:
May
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Ayah Zirikly, Andrew Yates, Bart Desmet, Molly Ireland, Steven Bedrick, Sean MacAvaney, Kfir Bar, Yaakov Ophir
Venues:
CLPsych | WS
Publisher:
Association for Computational Linguistics
Pages:
1–11
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.clpsych-1.1/
Cite (ACL):
Deniss Ruder, Andero Uusberg, and Kairit Sirts. 2025. Assessing the Reliability and Validity of GPT-4 in Annotating Emotion Appraisal Ratings. In Proceedings of the 10th Workshop on Computational Linguistics and Clinical Psychology (CLPsych 2025), pages 1–11, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Assessing the Reliability and Validity of GPT-4 in Annotating Emotion Appraisal Ratings (Ruder et al., CLPsych 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.clpsych-1.1.pdf