Razieh Rahimi
2023
Conditional Natural Language Inference
Youngwoo Kim | Razieh Rahimi | James Allan
Findings of the Association for Computational Linguistics: EMNLP 2023
To properly explain sentence pairs that provide contradictory (different) information for different conditions, we introduce the task of conditional natural language inference (Cond-NLI) and focus on automatically extracting contradictory aspects and their conditions from a sentence pair. Cond-NLI can help provide a full spectrum of information, such as when there are multiple answers to a question, each addressing a specific condition, or reviews with different opinions for different conditions. We show that widely used feature-attribution explanation models are not suitable for finding conditions, especially when sentences are long and written independently. We propose a simple yet effective model for the original NLI task that can successfully extract conditions while not requiring token-level annotations. Our model enhances the interpretability of the NLI task while maintaining comparable accuracy. To evaluate models for Cond-NLI, we build and release a token-level annotated dataset, BioClaim, which contains potentially contradictory claims from the biomedical domain. Our experiments show that our proposed model outperforms the full cross-encoder and other baselines in extracting conditions. It also performs on par with GPT-3, which has an order of magnitude more parameters and was trained on a huge amount of data.
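The abstract does not spell out the annotation schema, so the following is a minimal sketch of what a Cond-NLI instance might look like, assuming token-span annotations for the contradictory aspect and its conditions; the field names and the example claims are illustrative assumptions, not taken from BioClaim.

```python
# Minimal sketch of a Cond-NLI instance (illustrative schema, not the BioClaim format).
from dataclasses import dataclass, field

@dataclass
class CondNLIInstance:
    claim_a: str                 # first claim sentence
    claim_b: str                 # second, potentially contradictory claim
    # token spans (start, end) marking the contradictory aspect in each claim
    aspect_a: tuple[int, int] = (0, 0)
    aspect_b: tuple[int, int] = (0, 0)
    # token spans marking the conditions under which each claim holds
    conditions_a: list[tuple[int, int]] = field(default_factory=list)
    conditions_b: list[tuple[int, int]] = field(default_factory=list)

# Hypothetical example: the two claims disagree on the outcome, but each holds
# under a different condition (patient population), so the pair is conditionally
# rather than strictly contradictory.
example = CondNLIInstance(
    claim_a="Drug X reduced mortality in elderly patients.",
    claim_b="Drug X showed no mortality benefit in young adults.",
)
```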
PaRaDe: Passage Ranking using Demonstrations with LLMs
Andrew Drozdov | Honglei Zhuang | Zhuyun Dai | Zhen Qin | Razieh Rahimi | Xuanhui Wang | Dana Alon | Mohit Iyyer | Andrew McCallum | Donald Metzler | Kai Hui
Findings of the Association for Computational Linguistics: EMNLP 2023
Recent studies show that large language models (LLMs) can be instructed to effectively perform zero-shot passage re-ranking, in which the results of a first-stage retrieval method, such as BM25, are rated and reordered to improve relevance. In this work, we improve LLM-based re-ranking by algorithmically selecting few-shot demonstrations to include in the prompt. Our analysis investigates the conditions under which demonstrations are most helpful and shows that adding even a single demonstration is significantly beneficial. We propose a novel demonstration selection strategy based on difficulty rather than the commonly used semantic similarity. Furthermore, we find that demonstrations helpful for ranking are also effective for question generation. We hope our work will spur more principled research into question generation and passage ranking.
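As a rough illustration of the setup described above, the sketch below re-ranks first-stage results with a single prepended demonstration. The prompt template, the demonstration format, and the `query_log_likelihood` scoring call are assumptions made for illustration; they are not the paper's actual prompt or API.

```python
# Hedged sketch of one-shot LLM re-ranking: score each passage by how likely the
# LLM is to generate the query as a continuation of the passage's prompt.

def build_prompt(demo_passage: str, demo_query: str, passage: str) -> str:
    """Prepend a single (passage, query) demonstration before the target passage."""
    return (
        f"Passage: {demo_passage}\n"
        f"Query: {demo_query}\n\n"
        f"Passage: {passage}\n"
        f"Query:"
    )

def rerank(passages, query, demo, query_log_likelihood):
    """Re-order first-stage results (e.g. from BM25) by the log-likelihood the LLM
    assigns to the query given each passage's prompt. `query_log_likelihood` is a
    hypothetical scoring callable, not a real library function."""
    scored = [
        (query_log_likelihood(build_prompt(demo["passage"], demo["query"], p), query), p)
        for p in passages
    ]
    return [p for _, p in sorted(scored, key=lambda x: x[0], reverse=True)]
```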
2022
You can’t pick your neighbors, or can you? When and How to Rely on Retrieval in the kNN-LM
Andrew Drozdov | Shufan Wang | Razieh Rahimi | Andrew McCallum | Hamed Zamani | Mohit Iyyer
Findings of the Association for Computational Linguistics: EMNLP 2022
Retrieval-enhanced language models (LMs), which condition their predictions on text retrieved from large external datastores, have recently shown significant perplexity improvements compared to standard LMs. One such approach, the kNN-LM, interpolates any existing LM’s predictions with the output of a k-nearest-neighbors model and requires no additional training. In this paper, we explore the importance of lexical and semantic matching in the context of items retrieved by the kNN-LM. We find two trends: (1) the presence of large overlapping n-grams between the datastore and the evaluation set is an important factor in strong performance, even when the datastore is derived from the training data; and (2) the kNN-LM is most beneficial when retrieved items have high semantic similarity with the query. Based on our analysis, we define a new formulation of the kNN-LM that uses retrieval quality to assign the interpolation coefficient. We empirically measure the effectiveness of our approach on two English language modeling datasets, Wikitext-103 and PG-19. Our re-formulation of the kNN-LM is beneficial in both cases and leads to nearly a 4% improvement in perplexity on the Wikitext-103 test set.
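The standard kNN-LM interpolation and a hedged sketch of an adaptive coefficient are shown below. The linear mixture is the usual kNN-LM formulation; the particular way `adaptive_lambda` maps neighbor similarity to a coefficient is an illustrative assumption, not the formula from the paper.

```python
import numpy as np

def knn_lm_probs(p_lm: np.ndarray, p_knn: np.ndarray, lam: float) -> np.ndarray:
    """Standard kNN-LM interpolation: mix the base LM's next-token distribution
    with the distribution induced by the k nearest neighbors from the datastore."""
    return lam * p_knn + (1.0 - lam) * p_lm

def adaptive_lambda(neighbor_similarities, lam_max=0.5, temperature=1.0):
    """Sketch of tying the interpolation coefficient to retrieval quality: when the
    retrieved neighbors are semantically close to the query, trust the kNN
    distribution more. The scaled-sigmoid form here is an illustrative assumption,
    not the paper's exact formulation."""
    quality = float(np.mean(neighbor_similarities))
    return lam_max / (1.0 + np.exp(-quality / temperature))
```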
Co-authors
- Andrew Drozdov 2
- Mohit Iyyer 2
- Andrew McCallum 2
- Youngwoo Kim 1
- James Allan 1