Evaluating Large Language Models on Social Signal Sensitivity: An Appraisal Theory Approach

Zhen Wu, Ritam Dutt, Carolyn Rose


Abstract
We present a framework for assessing the sensitivity of Large Language Models (LLMs) to textually embedded social signals from an Appraisal Theory perspective. We report on an experiment that uses prompts encoding three dimensions of social signals: Affect, Judgment, and Appreciation. In response to each prompt, an LLM generates both an analysis (Insight) and a conversational Response, which we analyze for sensitivity to the encoded signals. We evaluate the output quantitatively through topical analysis of the Insight and through predicted social intelligence scores of the Response in terms of empathy and emotional polarity. Key findings show that LLMs are more sensitive to positive signals, and that the personas assigned in the prompts impact the Responses but not the Insight. We discuss how our framework can be extended to a broader set of social signals, personas, and scenarios to evaluate LLM behavior under various conditions.
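To make the setup concrete, below is a minimal Python sketch of the prompt-and-score loop the abstract describes. It is an illustrative reconstruction, not the authors' released code: the template wording, the helper names (build_prompt, evaluate), and the scorer callables (score_empathy, score_polarity) are all assumptions; only the Appraisal Theory dimensions (Affect, Judgment, Appreciation) and the Insight/Response split come from the paper.

# Hypothetical sketch of the evaluation loop; generation and scoring are
# injected as callables so the sketch stays model-agnostic.
from dataclasses import dataclass

SIGNAL_DIMENSIONS = ["Affect", "Judgment", "Appreciation"]  # Appraisal Theory
POLARITIES = ["positive", "negative"]

@dataclass
class Output:
    insight: str   # the model's analysis of the perceived social signal
    response: str  # the model's conversational reply

def build_prompt(scenario: str, dimension: str, polarity: str, persona: str) -> str:
    # Assumed template: assign a persona, embed a signal of the given
    # dimension and polarity, and ask for both an Insight and a Response.
    return (
        f"You are {persona}. Read the message below, which carries a "
        f"{polarity} {dimension} signal.\n\n{scenario}\n\n"
        "First describe the social signal you perceive (Insight), "
        "then write a reply to the speaker (Response)."
    )

def evaluate(generate, score_empathy, score_polarity, scenarios, personas):
    """Run the full prompt grid; `generate` maps a prompt string to an
    Output, and the two scorers stand in for the predicted social
    intelligence measures applied to the Response."""
    records = []
    for scenario in scenarios:
        for dimension in SIGNAL_DIMENSIONS:
            for polarity in POLARITIES:
                for persona in personas:
                    out = generate(build_prompt(scenario, dimension, polarity, persona))
                    records.append({
                        "dimension": dimension,
                        "polarity": polarity,
                        "persona": persona,
                        "insight": out.insight,  # later fed to topical analysis
                        "empathy": score_empathy(out.response),
                        "emotional_polarity": score_polarity(out.response),
                    })
    return records

Grouping the resulting records by polarity would surface the positive-signal sensitivity the abstract reports; grouping by persona would isolate the persona effect observed on Responses but not Insights.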
Anthology ID:
2024.hucllm-1.6
Volume:
Proceedings of the 1st Human-Centered Large Language Modeling Workshop
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Nikita Soni, Lucie Flek, Ashish Sharma, Diyi Yang, Sara Hooker, H. Andrew Schwartz
Venues:
HuCLLM | WS
Publisher:
Association for Computational Linguistics
Pages:
67–80
URL:
https://aclanthology.org/2024.hucllm-1.6
Cite (ACL):
Zhen Wu, Ritam Dutt, and Carolyn Rose. 2024. Evaluating Large Language Models on Social Signal Sensitivity: An Appraisal Theory Approach. In Proceedings of the 1st Human-Centered Large Language Modeling Workshop, pages 67–80, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Evaluating Large Language Models on Social Signal Sensitivity: An Appraisal Theory Approach (Wu et al., HuCLLM-WS 2024)
PDF:
https://aclanthology.org/2024.hucllm-1.6.pdf