We Are What We Repeatedly Do: Improving Long Context Instruction Following

Preston K Robinette, Andrew Hard, Swaroop Ramaswamy, Ehsan Amid, Rajiv Mathews, Taylor T Johnson


Abstract
Large language model context lengths have grown rapidly in recent years, from 512 tokens in GPT to 2M tokens in Gemini 1.5 Pro. Larger context windows enable models to condition on significantly more input tokens, leading to higher-quality responses for some user prompts. However, longer contexts also pose challenges to system instruction adherence. In this work, we formalize verifiable instructions to evaluate model *compliance* based on clear, measurable criteria. From these criteria, we present **VerIFY**, a **Ver**ifiable **I**nstruction **F**ollowing **Y**ardstick dataset designed to benchmark the compliance and accuracy of LLMs in adhering to various types of instructions across multi-turn, long-context conversations. Through experiments with open-source models, we reveal insights into instruction-following failures in long contexts, helping to improve the reliability, safety, and precision of these models. Furthermore, we implement and evaluate six mitigation strategies to enhance instruction compliance in extended contexts, achieving an improvement of up to 79%. This is the first work to consider instruction following for multi-turn, long-context conversations.
Anthology ID:
2026.findings-eacl.254
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Marquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4855–4884
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.254/
Cite (ACL):
Preston K Robinette, Andrew Hard, Swaroop Ramaswamy, Ehsan Amid, Rajiv Mathews, and Taylor T Johnson. 2026. We Are What We Repeatedly Do: Improving Long Context Instruction Following. In Findings of the Association for Computational Linguistics: EACL 2026, pages 4855–4884, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
We Are What We Repeatedly Do: Improving Long Context Instruction Following (Robinette et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.254.pdf
Checklist:
 2026.findings-eacl.254.checklist.pdf