What to Learn, and How: Toward Effective Learning from Rationales

Samuel Carton, Surya Kanoria, Chenhao Tan


Abstract
Learning from rationales seeks to augment model prediction accuracy using human-annotated rationales (i.e. subsets of input tokens) that justify their chosen labels, often in the form of intermediate or multitask supervision. While intuitive, this idea has proven elusive in practice. We make two observations about human rationales via empirical analyses: 1) maximizing rationale supervision accuracy is not necessarily the optimal objective for improving model accuracy; 2) human rationales vary in whether they provide sufficient information for the model to exploit for prediction. Building on these insights, we propose several novel loss functions and learning strategies, and evaluate their effectiveness on three datasets with human rationales. Our results demonstrate consistent improvements over baselines in both label and rationale accuracy, including a 3% accuracy improvement on MultiRC. Our work highlights the importance of understanding properties of human explanations and exploiting them accordingly in model training.
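The abstract describes learning from rationales as adding rationale supervision alongside the usual label objective. As a rough illustration of that general multitask setup (not the paper's specific loss functions or learning strategies; see the linked code for those), a joint objective that supervises both the task label and per-token rationale predictions could look like the sketch below. All function and argument names, tensor shapes, and the fixed weighting are assumptions made for illustration.

```python
# Illustrative sketch only: a generic multitask loss combining label
# cross-entropy with token-level rationale supervision. This is not the
# paper's exact formulation.
import torch
import torch.nn.functional as F


def joint_rationale_loss(label_logits, gold_labels,
                         token_scores, gold_rationales, token_mask,
                         rationale_weight=1.0):
    """Combine label cross-entropy with token-level rationale supervision.

    label_logits:    (batch, num_classes) model predictions for the task label
    gold_labels:     (batch,) gold class indices
    token_scores:    (batch, seq_len) per-token rationale logits
    gold_rationales: (batch, seq_len) binary human rationale annotations
    token_mask:      (batch, seq_len) 1 for real tokens, 0 for padding
    """
    # Standard task objective on the label head.
    label_loss = F.cross_entropy(label_logits, gold_labels)

    # Auxiliary objective: push per-token rationale scores toward the
    # human annotations, ignoring padding positions.
    per_token = F.binary_cross_entropy_with_logits(
        token_scores, gold_rationales.float(), reduction="none")
    rationale_loss = (per_token * token_mask).sum() / token_mask.sum().clamp(min=1)

    # Weighted sum; the paper's point is that simply maximizing rationale
    # agreement is not always the best objective, so the weighting and the
    # form of the auxiliary term matter.
    return label_loss + rationale_weight * rationale_loss
```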
Anthology ID:
2022.findings-acl.86
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1075–1088
URL:
https://aclanthology.org/2022.findings-acl.86
DOI:
10.18653/v1/2022.findings-acl.86
Cite (ACL):
Samuel Carton, Surya Kanoria, and Chenhao Tan. 2022. What to Learn, and How: Toward Effective Learning from Rationales. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1075–1088, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
What to Learn, and How: Toward Effective Learning from Rationales (Carton et al., Findings 2022)
PDF:
https://preview.aclanthology.org/nschneid-patch-2/2022.findings-acl.86.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-2/2022.findings-acl.86.mp4
Code:
chicagohai/learning-from-rationales
Data:
FEVER, MultiRC, e-SNLI