A Simple and Efficient Learning-Style Prompting for LLM Jailbreaking

Xuan Luo, Yue Wang, Zefeng He, Geng Tu, Jing Li, Ruifeng Xu


Abstract
This study reveals a critical safety blind spot in modern LLMs: learning-style queries, which closely resemble ordinary educational questions, can reliably elicit harmful responses. These learning-style queries are constructed via a novel reframing paradigm, HILL (Hiding Intention by Learning from LLMs). This deterministic, model-agnostic reframing framework comprises four conceptual components: 1) key concept, 2) exploratory transformation, 3) detail-oriented inquiry, and optionally 4) hypotheticality. Further, new metrics are introduced to thoroughly evaluate the efficiency and harmfulness of jailbreak methods. Experiments on the AdvBench dataset across a wide range of models demonstrate HILL's strong generalizability: it achieves top attack success rates on the majority of models and across malicious categories while maintaining high efficiency with concise prompts. Moreover, results for various defense methods show the robustness of HILL, with most defenses having mediocre effects or even increasing attack success rates. In addition, assessing these defenses on constructed safe prompts reveals inherent limitations of LLMs' safety mechanisms and flaws in the defense methods. This work exposes significant vulnerabilities of safety measures against learning-style elicitation, highlighting a critical challenge in fulfilling both helpfulness and safety alignment.
Anthology ID:
2026.findings-eacl.124
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Marquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2389–2406
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.124/
Cite (ACL):
Xuan Luo, Yue Wang, Zefeng He, Geng Tu, Jing Li, and Ruifeng Xu. 2026. A Simple and Efficient Learning-Style Prompting for LLM Jailbreaking. In Findings of the Association for Computational Linguistics: EACL 2026, pages 2389–2406, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
A Simple and Efficient Learning-Style Prompting for LLM Jailbreaking (Luo et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.124.pdf
Checklist:
 2026.findings-eacl.124.checklist.pdf