Marc E. Canby
2025
Benchmarking Query-Conditioned Natural Language Inference
Marc E. Canby | Xinchi Chen | Xing Niu | Jifan Chen | Bonan Min | Sergul Aydore | Vittorio Castelli
Findings of the Association for Computational Linguistics: ACL 2025
The growing excitement around the ability of large language models (LLMs) to tackle various tasks has been tempered by their propensity for generating unsubstantiated information (hallucination) and by their inability to handle inconsistent inputs effectively. To detect such issues, we propose the novel task of Query-Conditioned Natural Language Inference (QC-NLI), where the goal is to determine the semantic relationship (e.g., entailment or not entailment) between two documents conditioned on a query; we demonstrate that many common inconsistency-detection tasks can be formulated as QC-NLI problems. We focus on three applications in particular: fact verification, intrinsic hallucination detection, and document inconsistency detection. We convert existing datasets for these tasks into the QC-NLI format, and manual annotation confirms their high quality. Finally, we employ zero- and few-shot prompting methods to solve the QC-NLI prediction problem for each task, showing the critical importance of conditioning on the query.
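As a rough illustration of how a QC-NLI instance might be posed to an LLM in a zero-shot setting, consider the sketch below; the prompt wording, the two-way label set, and the `build_qc_nli_prompt` helper are illustrative assumptions, not the paper's exact template.

```python
# Hypothetical zero-shot QC-NLI prompt (a sketch, not the paper's exact formulation).
def build_qc_nli_prompt(premise_doc: str, hypothesis_doc: str, query: str) -> str:
    return (
        "Given the query, decide whether Document B is entailed by Document A.\n"
        f"Query: {query}\n"
        f"Document A: {premise_doc}\n"
        f"Document B: {hypothesis_doc}\n"
        "Answer with 'entailment' or 'not entailment':"
    )

prompt = build_qc_nli_prompt(
    premise_doc="The meeting was moved to Friday at 3pm.",
    hypothesis_doc="The meeting takes place on Friday.",
    query="When is the meeting?",
)
# The prompt would then be sent to an LLM; the query conditions which
# parts of the documents are relevant to the entailment decision.
```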
How Reliable are Causal Probing Interventions?
Marc E. Canby | Adam Davies | Chirag Rastogi | Julia Hockenmaier
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
Causal probing aims to analyze foundation models by examining how intervening on their representation of various latent properties impacts their outputs. Recent works have cast doubt on the theoretical basis of several leading causal probing methods, but it has been unclear how to systematically evaluate the effectiveness of these methods in practice. To address this, we define two key causal probing desiderata: *completeness* (how thoroughly the representation of the target property has been transformed) and *selectivity* (how little non-targeted properties have been impacted). We find that there is an inherent tradeoff between the two, and quantify it via *reliability*, their harmonic mean. We introduce an empirical analysis framework to measure and evaluate these quantities, allowing us to make the first direct comparisons between different families of leading causal probing methods (e.g., linear vs. nonlinear, or concept removal vs. counterfactual interventions). We find that: (1) all methods show a clear tradeoff between completeness and selectivity; (2) more complete and reliable methods have a greater impact on LLM behavior; and (3) nonlinear interventions are almost always more reliable than linear interventions. Our project webpage is available at: [https://ahdavies6.github.io/causal_probing_reliability/](https://ahdavies6.github.io/causal_probing_reliability/)
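Since the abstract defines reliability as the harmonic mean of completeness and selectivity, a minimal sketch of that computation (assuming both quantities have already been measured on a 0-1 scale) is:

```python
def reliability(completeness: float, selectivity: float) -> float:
    """Harmonic mean of completeness and selectivity (both in [0, 1])."""
    if completeness + selectivity == 0:
        return 0.0
    return 2 * completeness * selectivity / (completeness + selectivity)

# Example: a highly complete but unselective intervention scores low overall,
# reflecting the tradeoff the paper highlights.
print(reliability(0.95, 0.40))  # ~0.56
```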