2026
The Moral Foundations Reddit Corpus
Jackson P. Trager | Alireza S. Ziabari | Elnaz Rahmati | Aida Mostafazadeh Davani | Preni Golazizian | Farzan Karimi-Malekabadi | Ali Omrani | Zhihe Li | Brendan Kennedy | Georgios Chochlakis | Nils Karl Reimer | Melissa Reyes | Kesley Cheng | Mellow Wei | Christina Merrifield | Arta Khosravi | Evans Alvarez | Morteza Dehghani
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Moral framing and sentiment can affect a variety of online and offline behaviors, including donation, environmental action, political engagement, and protest. Various computational methods in Natural Language Processing (NLP) have been used to detect moral sentiment from textual data, but achieving strong performance on such subjective tasks requires large, hand-annotated datasets. Previous corpora annotated for moral sentiment have proven valuable and have generated new insights both within NLP and across the social sciences, but they have been limited to Twitter. To advance our understanding of the role of moral rhetoric, we present the Moral Foundations Reddit Corpus, a collection of 16,123 English Reddit comments curated from 12 distinct subreddits, each hand-annotated by at least three trained annotators for 8 categories of moral sentiment (i.e., Care, Proportionality, Equality, Purity, Authority, Loyalty, Thin Morality, Implicit/Explicit Morality) based on the updated Moral Foundations Theory (MFT) framework. We evaluate baselines using large language models (Llama3-8B, Ministral-8B) in zero-shot, few-shot, and PEFT (Parameter-Efficient Fine-Tuning) settings, comparing their performance to fine-tuned encoder-only models such as BERT (Bidirectional Encoder Representations from Transformers). The results show that LLMs continue to lag behind fine-tuned encoders on this subjective task, underscoring the ongoing need for human-annotated moral corpora for AI alignment evaluation.