Mansi Gupta
2020
A Data-Centric Framework for Composable NLP Workflows
Zhengzhong Liu | Guanxiong Ding | Avinash Bukkittu | Mansi Gupta | Pengzhi Gao | Atif Ahmed | Shikun Zhang | Xin Gao | Swapnil Singhavi | Linwei Li | Wei Wei | Zecong Hu | Haoran Shi | Xiaodan Liang | Teruko Mitamura | Eric Xing | Zhiting Hu
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Empirical natural language processing (NLP) systems in application domains (e.g., healthcare, finance, education) involve interoperation among multiple components, ranging from data ingestion and human annotation to text retrieval, analysis, generation, and visualization. We establish a unified open-source framework to support fast development of such sophisticated NLP workflows in a composable manner. The framework introduces a uniform data representation to encode heterogeneous results from a wide range of NLP tasks. It offers a large repository of processors for NLP tasks, visualization, and annotation, which can be easily assembled with full interoperability under the unified representation. The highly extensible framework allows plugging in custom processors from external off-the-shelf NLP and deep learning libraries. The whole framework is delivered through two modularized yet integratable open-source projects, namely Forte (for workflow infrastructure and NLP function processors) and Stave (for user interaction, visualization, and annotation).
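The composable design the abstract describes (processors that all read and write one shared data representation) can be illustrated with a minimal, generic pipeline-of-processors sketch. The names below (DataPack, Processor, Pipeline, add, run) are illustrative assumptions for this sketch, not Forte's documented API.

```python
# Generic pipeline-of-processors pattern: each processor enriches a shared
# data container, so components interoperate through one uniform representation.
# All class and method names here are illustrative, not Forte's actual API.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class DataPack:
    """Uniform container holding the text plus annotations added by processors."""
    text: str
    annotations: Dict[str, list] = field(default_factory=dict)


class Processor:
    """Base class: a processor consumes a DataPack and writes its results back."""
    def process(self, pack: DataPack) -> None:
        raise NotImplementedError


class SentenceSplitter(Processor):
    def process(self, pack: DataPack) -> None:
        # Naive split on periods, purely for illustration.
        pack.annotations["sentences"] = [s.strip() for s in pack.text.split(".") if s.strip()]


class Tokenizer(Processor):
    def process(self, pack: DataPack) -> None:
        pack.annotations["tokens"] = pack.text.split()


class Pipeline:
    """Processors are assembled in order and share the same DataPack."""
    def __init__(self) -> None:
        self.processors: List[Processor] = []

    def add(self, processor: Processor) -> "Pipeline":
        self.processors.append(processor)
        return self

    def run(self, text: str) -> DataPack:
        pack = DataPack(text=text)
        for processor in self.processors:
            processor.process(pack)
        return pack


if __name__ == "__main__":
    pack = Pipeline().add(SentenceSplitter()).add(Tokenizer()).run(
        "Processors compose into a workflow. Each one enriches a shared pack."
    )
    print(pack.annotations["sentences"])
    print(pack.annotations["tokens"])
```

Because every processor communicates only through the shared container, new components (retrieval, generation, visualization) can be swapped in without changing the rest of the workflow, which is the interoperability property the paper emphasizes.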
Learning to Deceive with Attention-Based Explanations
Danish Pruthi | Mansi Gupta | Bhuwan Dhingra | Graham Neubig | Zachary C. Lipton
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Attention mechanisms are ubiquitous components in neural architectures applied to natural language processing. In addition to yielding gains in predictive accuracy, attention weights are often claimed to confer interpretability, purportedly useful both for providing insights to practitioners and for explaining why a model makes its decisions to stakeholders. We call the latter use of attention mechanisms into question by demonstrating a simple method for training models to produce deceptive attention masks. Our method diminishes the total weight assigned to designated impermissible tokens, even when the models can be shown to nevertheless rely on these features to drive predictions. Across multiple models and tasks, our approach manipulates attention weights while incurring surprisingly little cost in accuracy. Through a human study, we show that our manipulated attention-based explanations deceive people into thinking that predictions from a model biased against gender minorities do not rely on gender. Consequently, our results cast doubt on attention’s reliability as a tool for auditing algorithms in the context of fairness and accountability.
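The core training trick the abstract describes (suppressing attention mass on designated impermissible tokens while leaving task accuracy largely intact) can be sketched as an auxiliary penalty added to the task loss. The penalty form and the coefficient name (lambda_coef) below are assumptions for illustration, not the paper's verbatim formulation.

```python
# Minimal sketch: add a penalty that drives attention mass away from
# designated "impermissible" tokens. The exact penalty and hyperparameter
# names are assumptions, not the paper's exact objective.
import torch


def deceptive_attention_loss(task_loss: torch.Tensor,
                             attention: torch.Tensor,
                             impermissible_mask: torch.Tensor,
                             lambda_coef: float = 0.1) -> torch.Tensor:
    """
    task_loss: scalar loss from the downstream task (e.g., cross-entropy).
    attention: (batch, seq_len) attention weights summing to 1 over seq_len.
    impermissible_mask: (batch, seq_len), 1.0 on tokens whose attention should
        be suppressed (e.g., gendered words), 0.0 elsewhere.
    """
    # Total attention mass assigned to impermissible tokens, per example.
    impermissible_mass = (attention * impermissible_mask).sum(dim=-1)
    # Penalize that mass; -log(1 - mass) grows sharply as the mass approaches 1.
    penalty = -torch.log(torch.clamp(1.0 - impermissible_mass, min=1e-6)).mean()
    return task_loss + lambda_coef * penalty


if __name__ == "__main__":
    attn = torch.softmax(torch.randn(4, 10), dim=-1)
    mask = torch.zeros(4, 10)
    mask[:, 0] = 1.0  # pretend the first token is impermissible
    loss = deceptive_attention_loss(torch.tensor(1.0), attn, mask)
    print(loss.item())
```

The point of the sketch is that nothing stops the underlying model from still using the masked features through other pathways; only the attention weights reported as an "explanation" are reshaped, which is what makes them deceptive.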