Structured Outputs in Prompt Engineering: Enhancing LLM Adaptability on Counterintuitive Instructions

Jingjing Ye, Song Bai, Zhenyang Li, Zheqi Zone


Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities in natural language processing tasks, yet they often exhibit cognitive inertia, rigidly adhering to ingrained training conventions even when prompted to deviate. This paper investigates the efficacy of structured output techniques in prompt engineering to mitigate such inertia and improve instruction-following on counterintuitive tasks. We argue that pairing structured inputs and outputs within our framework yields significant performance gains, which we evaluate on the Inversed IFEval dataset across varying prompts and domains. This work contributes to the growing field of prompt engineering research by demonstrating that structured outputs are a robust method for enhancing the logical reasoning of LLMs.
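The full paper details the authors' framework; as a rough, generic sketch of the structured-output prompting the abstract refers to (not the authors' exact method), the Python snippet below pairs a counterintuitive instruction with an explicit JSON response schema and validates the model's reply. The call_llm stub, the instruction, and the schema are illustrative assumptions, not taken from the paper.

import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model API; wire to your provider.
    raise NotImplementedError

# A counterintuitive instruction: the model must override its ingrained
# habit of advocating best practices and follow the inverted task.
INSTRUCTION = "Give exactly three reasons NOT to write unit tests."

# Structured-output prompt: an explicit JSON schema constrains the shape
# of the reply, the kind of constraint the abstract argues counteracts
# cognitive inertia.
PROMPT = (
    f"{INSTRUCTION}\n\n"
    "Respond ONLY with JSON of the form:\n"
    '{"reasons": ["<string>", "<string>", "<string>"]}'
)

def get_structured_reply(prompt: str = PROMPT) -> dict:
    raw = call_llm(prompt)
    data = json.loads(raw)  # raises ValueError on malformed JSON
    reasons = data.get("reasons")
    if not (isinstance(reasons, list) and len(reasons) == 3):
        raise ValueError("reply does not match the requested schema")
    return data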
Anthology ID:
2025.wasp-main.13
Volume:
Proceedings of the Third Workshop for Artificial Intelligence for Scientific Publications
Month:
December
Year:
2025
Address:
Mumbai, India and virtual
Editors:
Alberto Accomazzi, Tirthankar Ghosal, Felix Grezes, Kelly Lockhart
Venues:
WASP | WS
Publisher:
Association for Computational Linguistics
Pages:
115–120
URL:
https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.wasp-main.13/
Cite (ACL):
Jingjing Ye, Song Bai, Zhenyang Li, and Zheqi Zone. 2025. Structured Outputs in Prompt Engineering: Enhancing LLM Adaptability on Counterintuitive Instructions. In Proceedings of the Third Workshop for Artificial Intelligence for Scientific Publications, pages 115–120, Mumbai, India and virtual. Association for Computational Linguistics.
Cite (Informal):
Structured Outputs in Prompt Engineering: Enhancing LLM Adaptability on Counterintuitive Instructions (Ye et al., WASP 2025)
PDF:
https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.wasp-main.13.pdf