NegNLI-BR: A Brazilian Portuguese Benchmark for Negation in Natural Language Inference

Matheus Westhelle, Viviane Moreira


Abstract
Recent studies have questioned the ability of Large Language Models (LLMs) to handle logical negation. We revisit this issue within the Natural Language Inference (NLI) task, specifically investigating whether modern LLMs can distinguish negations that alter logical entailment (“important”) from those that do not (“unimportant”). For this purpose, we introduce NegNLI-BR, a new benchmark dataset in Portuguese designed to exercise this distinction. We evaluate a range of recent open-source LLMs, comparing the performance of their base and post-trained versions. Furthermore, we employ a causal probe to measure the Average Treatment Effect of negation interventions on the internal representations of LLMs. Our findings show that many recent LLMs, including smaller variants, effectively handle negation. The causal analysis reveals that important negations induce a stable and significant effect on model representations, distinct from unimportant negations or neutral filler words. We also observe that post-training generally enhances this representational sensitivity, suggesting it refines the models’ ability to encode the logical impact of negation.
Anthology ID:
2026.lrec-main.97
Volume:
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Month:
May
Year:
2026
Address:
Palma de Mallorca, Spain
Editors:
Stelios Piperidis, Núria Bel, Henk van den Heuvel, Nancy Ide, Simon Krek, Antonio Toral
Venue:
LREC
Publisher:
European Language Resources Association (ELRA)
Pages:
1226–1235
URL:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.97/
Cite (ACL):
Matheus Westhelle and Viviane Moreira. 2026. NegNLI-BR: A Brazilian Portuguese Benchmark for Negation in Natural Language Inference. In Proceedings of the Fifteenth Language Resources and Evaluation Conference, pages 1226–1235, Palma de Mallorca, Spain.
Cite (Informal):
NegNLI-BR: A Brazilian Portuguese Benchmark for Negation in Natural Language Inference (Westhelle & Moreira, LREC 2026)
PDF:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.97.pdf