Nutchanon Yongsatianchot


2025

Investigating Motivated Inference in Large Language Models
Nutchanon Yongsatianchot | Stacy Marsella
Proceedings of the 9th Widening NLP Workshop

Our desires often influence our beliefs and expectations. Humans tend to think good things are more likely to happen than they actually are, while believing bad things are less likely. This tendency has been referred to as wishful thinking in research on coping strategies. With large language models (LLMs) increasingly considered computational models of human cognition, we investigate whether they can simulate this distinctly human bias. We conducted two systematic experiments across multiple LLMs, manipulating outcome desirability and information uncertainty in scenarios including probability games, natural disasters, and sports events. Our experiments revealed limited wishful thinking in LLMs. In Experiment 1, only two models showed the bias, and only in sports-related scenarios when role-playing characters. Models exhibited no wishful thinking in mathematical contexts. Experiment 2 found that explicit prompting about emotional states (being hopeful) was necessary to elicit wishful thinking in logical domains. These findings reveal a significant gap between human cognitive biases and LLMs’ default behavior patterns, suggesting that current models require explicit guidance to simulate the influence of wishful thinking on belief formation.
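The design described here, varying outcome desirability and emotional framing while holding the underlying odds fixed, can be illustrated with a small probe. The sketch below is not the paper's code; the scenario text, the `query_model` placeholder, and the scoring scale are all assumptions, and the placeholder would need to be wired to whichever chat-completion API is under test.

```python
# Hypothetical sketch: probing wishful thinking by comparing an LLM's
# probability estimates for a desirable vs. an undesirable outcome of the
# same fair event, with and without an explicit "hopeful" framing.
import re


def query_model(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text reply."""
    raise NotImplementedError("Wire this up to your LLM provider's API.")


SCENARIO = (
    "You are role-playing a fan whose favorite team plays a decisive match "
    "tonight. A fair coin flip decides which team gets home advantage."
)


def probability_estimate(outcome: str, hopeful: bool) -> float:
    # Optional emotional framing, per the Experiment 2 manipulation.
    framing = "You are feeling very hopeful about tonight. " if hopeful else ""
    prompt = (
        f"{framing}{SCENARIO}\n"
        f"On a scale of 0 to 100, how likely is it that {outcome}? "
        "Answer with a single number."
    )
    reply = query_model(prompt)
    match = re.search(r"\d+(\.\d+)?", reply)
    return float(match.group()) / 100 if match else float("nan")


# Wishful thinking would show up as a systematically higher estimate for the
# desirable outcome than for the undesirable one, despite identical odds:
# p_good = probability_estimate("your team wins the coin flip", hopeful=True)
# p_bad  = probability_estimate("your team loses the coin flip", hopeful=True)
```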

Brown Like Chocolate: How Vision-Language Models Associate Skin Tone with Food Colors
Nutchanon Yongsatianchot | Pachaya Sailamul
Proceedings of the 9th Widening NLP Workshop

We investigate how Vision-Language Models (VLMs) leverage visual features when making analogical comparisons about people. Using synthetic images of individuals varying in skin tone and nationality, we prompt GPT and Gemini models to make analogical associations with desserts and drinks. Results reveal that VLMs systematically associate darker-skinned individuals with brown-colored food items, with GPT showing stronger associations than Gemini. These patterns are amplified in Thai versus English prompts, suggesting language-dependent encoding of visual stereotypes. The associations persist across manipulation checks including position swapping and clothing changes, though presenting individuals alone yields divergent language-specific patterns. This work reveals concerning associations in VLMs’ visual reasoning that vary by language, with important implications for multilingual deployment.
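One of the manipulation checks mentioned above, position swapping, can be sketched as follows. This is an illustrative outline rather than the paper's pipeline: the `query_vlm` placeholder, the image file names, and the dessert list are assumptions, and the same prompt would be repeated in Thai as well as English to compare languages.

```python
# Hypothetical sketch: a position-swap manipulation check for analogical
# association prompts to a vision-language model.
from itertools import permutations


def query_vlm(image_paths: list[str], prompt: str) -> str:
    """Placeholder: send images plus a prompt to a VLM and return its reply."""
    raise NotImplementedError("Wire this up to your VLM provider's API.")


# Illustrative synthetic portraits differing in skin tone, and candidate items.
IMAGES = ["person_lighter_skin.png", "person_darker_skin.png"]
DESSERTS = ["vanilla ice cream", "chocolate cake"]

PROMPT_EN = (
    "For each person shown, which of these desserts fits them best as an "
    f"analogy: {', '.join(DESSERTS)}? Answer for the left person, then the right."
)


def run_position_check(prompt: str) -> dict[tuple[str, ...], str]:
    """Ask the same question with the two portraits in both left/right orders."""
    results = {}
    for order in permutations(IMAGES):
        results[order] = query_vlm(list(order), prompt)
    return results


# If the association tracks skin tone rather than image position, the same
# person should receive the same dessert in both orderings:
# responses = run_position_check(PROMPT_EN)
```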