Idola Tribus of AI: Large Language Models tend to perceive order where none exists
Shin-nosuke Ishikawa, Masato Todo, Taiki Ogihara, Hirotsugu Ohba
Abstract
We present a tendency of large language models (LLMs) to generate absurd patterns, despite their clear inappropriateness, in a simple task of identifying regularities in number series. Several approaches have been proposed to apply LLMs to complex real-world tasks, such as providing knowledge through retrieval-augmented generation and executing multi-step tasks using AI agent frameworks. However, these approaches rely on the logical consistency and self-coherence of LLMs, making it crucial to evaluate these aspects and consider potential countermeasures. To identify cases where LLMs fail to maintain logical consistency, we conducted an experiment in which LLMs were asked to explain the patterns in various integer sequences, ranging from arithmetic sequences to randomly generated integer series. While the models successfully identified correct patterns in arithmetic and geometric sequences, they frequently over-recognized patterns that were inconsistent with the given numbers when analyzing randomly generated series. This issue was observed even in multi-step reasoning models, including OpenAI o3, o4-mini, and Google Gemini 2.5 Flash Preview Thinking. This tendency to perceive non-existent patterns can be interpreted as the AI model equivalent of Idola Tribus and highlights potential limitations in their capability for applied tasks requiring logical reasoning, even when employing chain-of-thought reasoning mechanisms.
- Anthology ID:
- 2025.findings-emnlp.681
- Volume:
- Findings of the Association for Computational Linguistics: EMNLP 2025
- Month:
- November
- Year:
- 2025
- Address:
- Suzhou, China
- Editors:
- Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 12714–12727
- URL:
- https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.681/
- DOI:
- 10.18653/v1/2025.findings-emnlp.681
- Cite (ACL):
- Shin-nosuke Ishikawa, Masato Todo, Taiki Ogihara, and Hirotsugu Ohba. 2025. Idola Tribus of AI: Large Language Models tend to perceive order where none exists. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 12714–12727, Suzhou, China. Association for Computational Linguistics.
- Cite (Informal):
- Idola Tribus of AI: Large Language Models tend to perceive order where none exists (Ishikawa et al., Findings 2025)
- PDF:
- https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.681.pdf