@inproceedings{zhou-etal-2026-program,
title = "Program-of-Thought Reveals {LLM} Abstraction Ceilings",
author = "Zhou, Mike and
Bardoliya, Fenil and
Gupta, Vivek and
Roth, Dan",
editor = "Demberg, Vera and
Inui, Kentaro and
Marquez, Llu{\'i}s",
booktitle = "Findings of the {A}ssociation for {C}omputational {L}inguistics: {EACL} 2026",
month = mar,
year = "2026",
address = "Rabat, Morocco",
publisher = "Association for Computational Linguistics",
url = "https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.257/",
pages = "4911--4919",
ISBN = "979-8-89176-386-9",
abstract = "Large language models (LLMs) are often claimed to exhibit reasoning ability when supervised with chain-of-thought (CoT) traces. True reasoning, however, requires invariance: isomorphic problems should yield identical solutions regardless of superficial variation. We test this property by evaluating base and reasoning-optimized models{---}including LLaMA, Mistral, Qwen, GPT-OSS, and DeepSeek{---}on isomorphic variants from GSM8K and MATH. All models exhibit substantial accuracy drops under perturbation. To assess whether training can induce invariance, we fine-tune models with Program-of-Thought (PoT) supervision under concrete and masked formulations. PoT fine-tuning increases behavioral cross-variant consistency but does not significantly reduce the accuracy gap, and these gains fail to transfer across prompting formats and domains. Our central finding is that models converge toward stable but systematically incorrect behaviors: consistency without correctness. This dissociation suggests that current reasoning supervision teaches models to reproduce solution templates rather than to abstract mathematical structure."
}