Nicholas Clark


2025

Mind the Value-Action Gap: Do LLMs Act in Alignment with Their Values?
Hua Shen | Nicholas Clark | Tanu Mitra
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Existing research assesses LLMs’ values by analyzing their stated inclinations, overlooking potential discrepancies between stated values and actions, termed the “Value-Action Gap.” This study introduces ValueActionLens, a framework to evaluate the alignment between LLMs’ stated values and their value-informed actions. The framework includes a dataset of 14.8k value-informed actions across 12 cultures and 11 social topics, along with two tasks measuring alignment through three metrics. Experiments show substantial misalignment between LLM-generated value statements and their actions, with significant variations across scenarios and models. These misalignments reveal potential harms, highlighting the risks of relying solely on stated values to predict behavior. The findings stress the need for context-aware evaluations of LLM values and their value-action gaps.

ValueCompass: A Framework for Measuring Contextual Value Alignment Between Human and LLMs
Hua Shen | Tiffany Knearem | Reshmi Ghosh | Yu-Ju Yang | Nicholas Clark | Tanu Mitra | Yun Huang
Proceedings of the 9th Widening NLP Workshop

As AI advances, aligning it with diverse human and societal values grows critical. But how do we define these values and measure AI’s adherence to them? We present ValueCompass, a framework grounded in psychological theories, to assess human-AI alignment. Applying it to five diverse LLMs and 112 humans from seven countries across four scenarios—collaborative writing, education, public sectors, and healthcare—we uncover key misalignments. For example, humans prioritize national security, while LLMs often reject it. Values also shift across contexts, demanding scenario-specific alignment strategies. This work advances AI design by mapping how systems can better reflect societal ethics.