Ryan Peters


2026

Leveraging a dataset of paired narratives, we investigate the extent to which large language models (LLMs) can reliably distinguish incoherent from coherent stories. A probing study finds that LLMs’ internal representations can reliably identify incoherent events in narratives. However, this separation disappears by the narrative’s end and weakens when the differences between coherent and incoherent stories are more subtle. When asked to rate the overall coherence of narratives after reading, LLMs generate responses that fail to satisfactorily separate the coherent and incoherent narratives. The reasoning models we tested do not eliminate these deficits, indicating that thought strings may not be able to fully resolve the discrepancy between model internal state and behavior. Additionally, we find that LLMs appear to be more sensitive to incoherence arising from an event that violates the setting (e.g., a rainy day in the desert) than to incoherence arising from a character violating an established trait (e.g., Mary, a vegetarian, later orders a cheeseburger), suggesting that LLMs may rely more on prototypical world knowledge than on building coherence through a meaning-based world model of the narrative setting. Together, our results indicate that LLMs lack robustness in their ability to recognize incoherence in narratives.