Right for the Right Reason: Evidence Extraction for Trustworthy Tabular Reasoning

Vivek Gupta, Shuo Zhang, Alakananda Vempala, Yujie He, Temma Choji, Vivek Srikumar


Abstract
When pre-trained contextualized embedding-based models developed for unstructured data are adapted for structured tabular data, they perform admirably. However, recent probing studies show that these models use spurious correlations, and often predict inference labels by focusing on false evidence or ignoring it altogether. To study this issue, we introduce the task of Trustworthy Tabular Reasoning, where a model needs to extract evidence to be used for reasoning, in addition to predicting the label. As a case study, we propose a two-stage sequential prediction approach, which includes an evidence extraction and an inference stage. First, we crowdsource evidence row labels and develop several unsupervised and supervised evidence extraction strategies for InfoTabS, a tabular NLI benchmark. Our evidence extraction strategy outperforms earlier baselines. On the downstream tabular inference task, using only the automatically extracted evidence as the premise, our approach outperforms prior benchmarks.
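The two-stage pipeline described in the abstract (extract evidence rows first, then reason over only those rows) can be illustrated with a minimal sketch. This is not the authors' implementation: the toy entity table, the row-to-sentence template, the off-the-shelf roberta-large-mnli scorer, and the top-2 evidence cutoff are all illustrative assumptions standing in for the paper's unsupervised and supervised extraction strategies.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "roberta-large-mnli"  # off-the-shelf NLI model used here only as a stand-in scorer
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
ENTAIL = model.config.label2id["ENTAILMENT"]  # index of the entailment class

def entailment_prob(premise: str, hypothesis: str) -> float:
    """P(entailment) for a premise/hypothesis pair under the NLI model."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, ENTAIL].item()

# Hypothetical INFOTABS-style entity table (key -> value rows) and a hypothesis.
table = {
    "Born": "14 March 1879, Ulm, German Empire",
    "Died": "18 April 1955, Princeton, New Jersey, U.S.",
    "Fields": "Physics, philosophy",
    "Known for": "General relativity, special relativity",
}
hypothesis = "The scientist was born in Germany."

# Stage 1: evidence extraction -- turn each row into a sentence and keep the
# rows most relevant to the hypothesis (here: highest entailment probability).
rows = [f"{key} of the entity is {value}." for key, value in table.items()]
ranked = sorted(rows, key=lambda r: entailment_prob(r, hypothesis), reverse=True)
evidence = ranked[:2]  # top-2 cutoff is an illustrative choice

# Stage 2: inference -- predict the label using only the extracted evidence
# as the premise, mirroring the downstream tabular-inference evaluation.
premise = " ".join(evidence)
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print("Evidence rows:", evidence)
print("Predicted label:", model.config.id2label[pred])

In this sketch the same NLI model serves as both the row scorer and the final classifier for brevity; the paper instead trains and evaluates the extraction and inference stages separately.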
Anthology ID:
2022.acl-long.231
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
3268–3283
URL:
https://aclanthology.org/2022.acl-long.231
DOI:
10.18653/v1/2022.acl-long.231
Cite (ACL):
Vivek Gupta, Shuo Zhang, Alakananda Vempala, Yujie He, Temma Choji, and Vivek Srikumar. 2022. Right for the Right Reason: Evidence Extraction for Trustworthy Tabular Reasoning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3268–3283, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Right for the Right Reason: Evidence Extraction for Trustworthy Tabular Reasoning (Gupta et al., ACL 2022)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2022.acl-long.231.pdf
Video:
https://preview.aclanthology.org/ingestion-script-update/2022.acl-long.231.mp4
Data
TabFact