This is not a Disimprovement: Improving Negation Reasoning in Large Language Models via Prompt Engineering

Joshua Jose Dias Barreto, Abhik Jana


Abstract
Negation reasoning remains a challenge for large language models (LLMs), often causing incorrect interpretations of negated statements. In this study, we analyze various LLMs for their handling of negation and propose two genres of prompts (*Warning-based* and *Persona-based*), which improve overall absolute accuracy by up to 3.17% and distractor negation accuracy by up to 25.14% over most competitive baselines. Next, we assess the robustness of LLMs by reordering prompts while preserving meaning, observing instability linked to positional encoding schemes. Further, we introduce a negative token attention score (NTAS) to quantify attention to negation words. From the comprehensive analysis, we point out that within a specific LLM family, the performance of a model (measured using accuracy) correlates more with NTAS than with model size. The code is publicly available: [https://github.com/Joshua-Dias-Barreto/This-is-not-a-Disimprovement](https://github.com/Joshua-Dias-Barreto/This-is-not-a-Disimprovement)
Anthology ID: 2025.findings-emnlp.761
Volume: Findings of the Association for Computational Linguistics: EMNLP 2025
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 14149–14156
URL: https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.761/
DOI: 10.18653/v1/2025.findings-emnlp.761
Cite (ACL): Joshua Jose Dias Barreto and Abhik Jana. 2025. This is not a Disimprovement: Improving Negation Reasoning in Large Language Models via Prompt Engineering. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 14149–14156, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): This is not a Disimprovement: Improving Negation Reasoning in Large Language Models via Prompt Engineering (Dias Barreto & Jana, Findings 2025)
PDF: https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.761.pdf
Checklist: 2025.findings-emnlp.761.checklist.pdf