Structured Outputs in Prompt Engineering: Enhancing LLM Adaptability on Counterintuitive Instructions
Jingjing Ye | Song Bai | Zhenyang Li | Zheqi Zone
Proceedings of the Third Workshop for Artificial Intelligence for Scientific Publications, 2025
Large Language Models (LLMs) have demonstrated remarkable capabilities in natural language processing tasks, yet they often exhibit cognitive inertia, rigidly adhering to conventions ingrained during training even when explicitly prompted to deviate. This paper investigates the efficacy of structured output techniques in prompt engineering for mitigating such inertia and improving instruction-following on counterintuitive tasks. We show that combining structured inputs and outputs within our framework yields significant performance gains, evaluated on the Inversed IFEval dataset across varying prompts and domains. This work contributes to the growing field of prompt engineering by demonstrating that structured outputs are a robust method for enhancing LLM instruction adherence and logical reasoning.
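To make the idea concrete, the following is a minimal sketch of structured-output prompting in the spirit the abstract describes; the function names, the JSON schema, and the example fields are hypothetical illustrations, not the paper's actual framework or implementation:

```python
import json


def build_structured_prompt(instruction: str, fields: list[str]) -> str:
    """Wrap a task instruction so the model must answer in a fixed JSON schema.

    Forcing the model to restate the (possibly counterintuitive) constraint
    in its own output field is one plausible way structured outputs could
    counteract the cognitive inertia discussed in the abstract.
    """
    schema = {field: "<string>" for field in fields}
    return (
        f"{instruction}\n\n"
        "Respond ONLY with JSON matching this schema:\n"
        f"{json.dumps(schema, indent=2)}"
    )


def parse_structured_response(raw: str, fields: list[str]) -> dict:
    """Parse the model's reply and verify every required field is present."""
    data = json.loads(raw)
    missing = [field for field in fields if field not in data]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return data


# Example: a counterintuitive instruction (lowercase proper nouns) with an
# explicit field making the model restate the constraint before answering.
fields = ["restated_constraint", "answer"]
prompt = build_structured_prompt(
    "Answer in all lowercase, even for proper nouns: who wrote Hamlet?",
    fields,
)

# A hypothetical model reply, validated against the required fields.
reply = (
    '{"restated_constraint": "answer must be entirely lowercase", '
    '"answer": "william shakespeare"}'
)
parsed = parse_structured_response(reply, fields)
```

The design choice here is that validation failures (missing fields, malformed JSON) surface as exceptions, so a caller can detect non-compliant outputs and retry rather than silently accepting a response that ignored the instruction.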