Sayna Ebrahimi
2024
TextGenSHAP: Scalable Post-Hoc Explanations in Text Generation with Long Documents
James Enouen | Hootan Nakhost | Sayna Ebrahimi | Sercan Arik | Yan Liu | Tomas Pfister
Findings of the Association for Computational Linguistics: ACL 2024
Large language models (LLMs) have attracted great interest in many real-world applications; however, their “black-box” nature necessitates scalable and faithful explanations. Shapley values have matured as an explainability method for deep learning, but extending them to LLMs is difficult due to long input contexts and autoregressive output generation. We introduce TextGenSHAP, an efficient post-hoc explanation method incorporating LLM-specific techniques, which leads to significant runtime improvements: token-level explanations in minutes rather than hours, and document-level explanations within seconds. We demonstrate how such explanations can improve the end-to-end performance of retrieval-augmented generation by localizing important words within long documents and reranking passages collected by retrieval systems. On various open-domain question answering benchmarks, we show that TextGenSHAP significantly improves retrieval recall and prediction accuracy.
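For context, below is a minimal sketch of the baseline Monte Carlo permutation estimator for Shapley values applied to token-level importance; this is the classical estimator whose cost TextGenSHAP reduces, not the paper's actual implementation. The `score` callable is a hypothetical stand-in for the model's likelihood of the generated answer given a subset of input tokens.

```python
import random

def shapley_token_importance(tokens, score, n_samples=200, seed=0):
    """Plain Monte Carlo Shapley estimate of each token's contribution.

    `score(subset_tokens)` is assumed to return a scalar, e.g. the
    model's log-likelihood of the target answer given only those
    tokens. This is the baseline estimator; TextGenSHAP's contribution
    is making such attribution tractable for long inputs.
    """
    rng = random.Random(seed)
    n = len(tokens)
    phi = [0.0] * n
    for _ in range(n_samples):
        order = list(range(n))
        rng.shuffle(order)          # random player (token) ordering
        present = [False] * n
        prev = score([])            # value of the empty context
        for i in order:
            present[i] = True
            cur = score([t for t, p in zip(tokens, present) if p])
            phi[i] += cur - prev    # marginal contribution of token i
            prev = cur
    return [v / n_samples for v in phi]

# Toy usage with a hypothetical scoring function:
tokens = ["Paris", "is", "the", "capital", "of", "France"]
score = lambda sub: float("Paris" in sub) + 0.5 * float("France" in sub)
print(shapley_token_importance(tokens, score, n_samples=50))
```

Each sample requires one model call per token, so cost grows with input length times sample count; that scaling is exactly the bottleneck for long documents that the paper targets.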
2023
Adaptation with Self-Evaluation to Improve Selective Prediction in LLMs
Jiefeng Chen | Jinsung Yoon | Sayna Ebrahimi | Sercan Arik | Tomas Pfister | Somesh Jha
Findings of the Association for Computational Linguistics: EMNLP 2023
Large language models (LLMs) have recently shown great advances in a variety of tasks, including natural language understanding and generation. However, their use in high-stakes decision-making scenarios is still limited due to the potential for errors. *Selective prediction* is a technique that can improve the reliability of LLMs by allowing them to abstain from making predictions when they are unsure of the answer. In this work, we propose a novel framework for adaptation with self-evaluation to improve the selective prediction performance of LLMs. Our framework is based on the idea of using parameter-efficient tuning to adapt the LLM to the specific task at hand while improving its ability to perform self-evaluation. We evaluate our method on a variety of question-answering (QA) datasets and show that it outperforms state-of-the-art selective prediction methods. For example, on the CoQA benchmark, our method improves the AUACC from 91.23% to 92.63% and the AUROC from 74.61% to 80.25%.
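For illustration, here is a minimal sketch of the threshold-based abstention rule that underlies selective prediction; `generate` and `self_eval` are hypothetical callables standing in for the adapted LLM and its self-evaluation score, not the paper's API.

```python
def selective_predict(question, generate, self_eval, threshold=0.8):
    """Answer only when the self-evaluation score clears a threshold.

    `generate(question)` produces an answer and
    `self_eval(question, answer)` returns a confidence in [0, 1];
    both are hypothetical stand-ins for the tuned model. Sweeping the
    threshold traces the accuracy-coverage curve summarized by AUACC.
    """
    answer = generate(question)
    confidence = self_eval(question, answer)
    if confidence >= threshold:
        return answer
    return None  # abstain: defer to a human or a fallback system

# Toy usage with stub callables:
generate = lambda q: "42"
self_eval = lambda q, a: 0.9 if "life" in q else 0.3
print(selective_predict("meaning of life?", generate, self_eval))  # "42"
print(selective_predict("unknown?", generate, self_eval))          # None
```

The paper's contribution is improving the quality of the `self_eval` signal via parameter-efficient tuning, so that higher confidence more reliably indicates a correct answer.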