Patrick Y. Wu
2025
Measuring scalar constructs in social science with LLMs
Hauke Licht | Rupak Sarkar | Patrick Y. Wu | Pranav Goel | Niklas Stoehr | Elliott Ash | Alexander Miserlis Hoyle
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Many constructs that characterize language, like its complexity or emotionality, have a naturally continuous semantic structure; a public speech is not just “simple” or “complex”, but exists on a continuum between extremes. Although large language models (LLMs) are an attractive tool for measuring scalar constructs, their idiosyncratic treatment of numerical outputs raises the question of how best to apply them. We address this question with a comprehensive evaluation of LLM-based approaches to scalar construct measurement in social science. Using multiple datasets sourced from the political science literature, we evaluate four approaches: unweighted direct pointwise scoring, aggregation of pairwise comparisons, token-probability-weighted pointwise scoring, and finetuning. We find that pairwise comparisons made by LLMs produce better measurements than directly prompting the LLM to output scores, which suffers from bunching around arbitrary numbers. Taking the mean of scores weighted by their token probabilities improves measurements further still. Finally, finetuning smaller models with as few as 1,000 training pairs can match or exceed the performance of prompted LLMs.
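The token-probability-weighted scoring the abstract describes can be sketched as follows. The probabilities below are illustrative placeholders, not the paper's data; the idea is to take the expected value over the model's next-token distribution on numeric score tokens rather than the single most likely token.

```python
def weighted_score(token_probs: dict[str, float]) -> float:
    """Expected score under an LLM's distribution over numeric score tokens.

    token_probs maps score tokens (e.g. "1".."5") to the probabilities the
    model assigned them; probabilities are renormalized over the score tokens.
    """
    total = sum(token_probs.values())
    return sum(int(tok) * p for tok, p in token_probs.items()) / total

# Hypothetical next-token probabilities for a 1-5 rating prompt.
probs = {"1": 0.05, "2": 0.10, "3": 0.25, "4": 0.40, "5": 0.20}
score = weighted_score(probs)  # → 3.6, instead of the argmax token "4"
```

Compared with taking the argmax token, the expected value yields a continuous score, which is what mitigates the bunching-around-round-numbers problem the abstract reports.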
PairScale: Analyzing Attitude Change with Pairwise Comparisons
Rupak Sarkar | Patrick Y. Wu | Kristina Miler | Alexander Miserlis Hoyle | Philip Resnik
Findings of the Association for Computational Linguistics: NAACL 2025
We introduce a text-based framework for measuring attitudes in communities toward issues of interest, going beyond the pro/con/neutral of conventional stance detection to characterize attitudes on a continuous scale using both implicit and explicit evidence in language. The framework exploits LLMs both to extract attitude-related evidence and to perform pairwise comparisons that yield unidimensional attitude scores via the classic Bradley-Terry model. We validate the LLM-based steps using human judgments, and illustrate the utility of the approach for social science by examining the evolution of attitudes on two high-profile issues in U.S. politics in two political communities on Reddit over the period spanning from the 2016 presidential campaign to the 2022 midterm elections. WARNING: Potentially sensitive political content.
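The Bradley-Terry step can be sketched with the classic MM (minorization-maximization) update. The win counts below are made up for illustration; PairScale's actual prompts and comparison pipeline are not reproduced here.

```python
def bradley_terry(wins: dict, items: list, iters: int = 200) -> dict:
    """Fit Bradley-Terry strengths from pairwise win counts via the MM update.

    wins[(a, b)] = number of comparisons in which a was preferred to b.
    Returns a strength for each item; under the model, a beats b with
    probability p[a] / (p[a] + p[b]).
    """
    p = {i: 1.0 for i in items}
    for _ in range(iters):
        new = {}
        for i in items:
            num = sum(w for (a, _b), w in wins.items() if a == i)  # total wins of i
            den = sum(w / (p[a] + p[b]) for (a, b), w in wins.items() if i in (a, b))
            new[i] = num / den if den > 0 else p[i]
        s = sum(new.values())
        p = {i: v * len(items) / s for i, v in new.items()}  # fix arbitrary scale
    return p

# Hypothetical LLM pairwise judgments over three texts.
wins = {("A", "B"): 8, ("B", "A"): 2, ("B", "C"): 8,
        ("C", "B"): 2, ("A", "C"): 9, ("C", "A"): 1}
scores = bradley_terry(wins, ["A", "B", "C"])  # strengths ordered A > B > C
```

Taking the log of the fitted strengths gives the familiar unidimensional scale on which items can be placed and compared over time.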
2022
Dictionary-Assisted Supervised Contrastive Learning
Patrick Y. Wu | Richard Bonneau | Joshua A. Tucker | Jonathan Nagler
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Text analysis in the social sciences often involves using specialized dictionaries to reason with abstract concepts, such as perceptions about the economy or abuse on social media. These dictionaries allow researchers to impart domain knowledge and note subtle usages of words relating to concepts of interest. We introduce the dictionary-assisted supervised contrastive learning (DASCL) objective, allowing researchers to leverage specialized dictionaries when fine-tuning pretrained language models. The text is first keyword-simplified: a common, fixed token replaces any word in the corpus that appears in the dictionaries relevant to the concept of interest. During fine-tuning, a supervised contrastive objective draws closer the embeddings of the original and keyword-simplified texts of the same class while pushing apart the embeddings of different classes. Because the keyword-simplified texts of the same class are more textually similar than their original counterparts, this additionally draws the embeddings of the same class closer together. Combining DASCL and cross-entropy improves classification performance in few-shot learning settings and social science applications compared to using cross-entropy alone or alternative contrastive and data augmentation methods.
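The keyword-simplification step can be sketched as below. The placeholder token format (`<economy>`) and the toy dictionary are assumptions for illustration; the paper specifies only that a common, fixed token replaces dictionary words.

```python
def keyword_simplify(text: str, dictionaries: dict[str, set[str]]) -> str:
    """Replace any word appearing in a concept dictionary with that concept's
    fixed placeholder token, as in DASCL's keyword-simplification step."""
    out = []
    for word in text.split():
        stripped = word.lower().strip(".,!?")  # crude normalization for the sketch
        for concept, vocab in dictionaries.items():
            if stripped in vocab:
                out.append(f"<{concept}>")
                break
        else:
            out.append(word)
    return " ".join(out)

# Toy dictionary for an "economy" concept.
econ = {"economy": {"inflation", "jobs", "wages"}}
keyword_simplify("Inflation is hurting wages.", econ)
# → "<economy> is hurting <economy>"
```

During fine-tuning, each original text and its simplified counterpart form a positive pair for the supervised contrastive objective, which is what pulls same-class embeddings together.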