Quantifying the Impact of Structured Output Format on Large Language Models through Causal Inference

Han Yuan, Yue Zhao, Li Zhang, Wuqiong Luo, Zheng Ma


Abstract
Structured output from large language models (LLMs) has enhanced efficiency in processing generated information and is increasingly adopted in industrial applications. Prior studies have investigated the impact of structured output on LLMs’ generation quality, often reporting one-sided findings. Some suggest that structured formats enhance completeness and factual accuracy, while others argue that they restrict the reasoning capacity of LLMs and lead to reductions in standard evaluation metrics. Potential limitations of these assessments include restricted testing scenarios, weakly controlled comparative settings, and reliance on coarse metrics. In this work, we present a refined analysis using causal inference. Based on one assumed and two guaranteed constraints, we derive five potential causal structures characterizing the influence of structured output on LLMs’ generation: (1) collider without m-bias, (2) collider with m-bias, (3) single cause from instruction, (4) single cause from output format, and (5) independence. Across seven public reasoning tasks and one task we developed, we find that coarse metrics report positive, negative, or neutral effects of structured output on GPT-4o’s generation. However, causal inference reveals no causal impact in 43 out of 48 scenarios. Of the remaining 5, 3 involve multifaceted causal structures influenced by concrete instructions. Further experiments show that OpenAI-o3 is more resilient to output formats than the general-purpose GPT-4o and GPT-4.1, highlighting a previously overlooked advantage of reasoning models.
Anthology ID:
2026.findings-eacl.91
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Marquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1771–1795
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.91/
Cite (ACL):
Han Yuan, Yue Zhao, Li Zhang, Wuqiong Luo, and Zheng Ma. 2026. Quantifying the Impact of Structured Output Format on Large Language Models through Causal Inference. In Findings of the Association for Computational Linguistics: EACL 2026, pages 1771–1795, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Quantifying the Impact of Structured Output Format on Large Language Models through Causal Inference (Yuan et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.91.pdf
Checklist:
 2026.findings-eacl.91.checklist.pdf