UNIREX: A Unified Learning Framework for Language Model Rationale Extraction
Aaron Chan, Maziar Sanjabi, Lambert Mathias, Liang Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, Hamed Firooz
Abstract
An extractive rationale explains a language model's (LM's) prediction on a given task instance by highlighting the text inputs that most influenced the prediction. Ideally, rationale extraction should be faithful (reflective of the LM's actual behavior) and plausible (convincing to humans), without compromising the LM's (i.e., task model's) task performance. Although attribution algorithms and select-predict pipelines are commonly used in rationale extraction, they both rely on certain heuristics that hinder them from satisfying all three desiderata. In light of this, we propose UNIREX, a flexible learning framework which generalizes rationale extractor optimization as follows: (1) specify architecture for a learned rationale extractor; (2) select explainability objectives (i.e., faithfulness and plausibility criteria); and (3) jointly train the task model and rationale extractor on the task using selected objectives. UNIREX enables replacing prior works' heuristic design choices with a generic learned rationale extractor in (1) and optimizing it for all three desiderata in (2)-(3). To facilitate comparison between methods w.r.t. multiple desiderata, we introduce the Normalized Relative Gain (NRG) metric. Across five English text classification datasets, our best UNIREX configuration outperforms the strongest baselines by an average of 32.9% NRG. Plus, we find that UNIREX-trained rationale extractors' faithfulness can even generalize to unseen datasets and tasks.
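To make step (3) of the abstract concrete, here is a minimal PyTorch-style sketch of jointly training a task model and a learned rationale extractor against task, faithfulness, and plausibility objectives. Everything here (the function name `unirex_step`, the model call signatures, the loss weights, and the hard top-k masking) is an illustrative assumption rather than the paper's actual implementation; see the facebookresearch/unirex repository for the real code.

```python
# Illustrative sketch of UNIREX-style joint training (step 3 in the abstract).
# All names and signatures are assumptions, not the paper's API.
import torch
import torch.nn.functional as F

def unirex_step(task_model, extractor, batch, a_faith=1.0, a_plaus=1.0, k=0.2):
    """One joint step over task, faithfulness, and plausibility losses.
    Assumes task_model(ids) -> (B, C) logits, extractor(ids) -> (B, T) scores,
    and batch["gold_rationales"] -> (B, T) floats in [0, 1]."""
    input_ids, labels = batch["input_ids"], batch["labels"]

    # (a) Task loss: the task model must still solve the task.
    task_loss = F.cross_entropy(task_model(input_ids), labels)

    # (b) Rationale extraction: per-token importance scores, shape (B, T).
    scores = extractor(input_ids)

    # (c) Faithfulness: comprehensiveness/sufficiency-style terms, keeping
    # only the top-k% tokens (sufficiency) or dropping them (comprehensiveness).
    # Zeroing token ids is a crude stand-in for real input masking, and the
    # hard top-k mask passes no gradient to the extractor, so these terms
    # mainly shape the task model's behavior.
    n_keep = max(1, int(k * scores.size(1)))
    keep = torch.zeros_like(scores).scatter(1, scores.topk(n_keep, dim=1).indices, 1.0)
    suff_loss = F.cross_entropy(task_model(input_ids * keep.long()), labels)
    comp_loss = F.cross_entropy(task_model(input_ids * (1 - keep).long()), labels)
    # Low CE with the rationale, high CE without it; in practice the
    # comprehensiveness term would be bounded (e.g., with a margin) for stability.
    faith_loss = suff_loss - comp_loss

    # (d) Plausibility: push token scores toward human rationale annotations.
    plaus_loss = F.binary_cross_entropy_with_logits(scores, batch["gold_rationales"])

    return task_loss + a_faith * faith_loss + a_plaus * plaus_loss
```

Because all three terms sit in one scalar loss, a single backward pass trades off task performance, faithfulness, and plausibility, which is what lets the framework optimize all three desiderata at once rather than fixing any of them with heuristics.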
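The Normalized Relative Gain (NRG) metric aggregates several metrics with different scales and directions into one comparable score. The paper gives the exact definition; the sketch below only illustrates a plausible min-max-normalize-then-average reading of it, with made-up method names and numbers.

```python
# Hedged sketch of an NRG-style aggregate: min-max normalize each metric
# across methods (flipping lower-is-better ones), then average. This is an
# assumption about the general idea; consult the paper for the exact definition.
def normalized_relative_gain(results, higher_is_better):
    """results: {method: {metric: value}}. Returns {method: mean NRG in [0, 1]}."""
    nrg = {method: 0.0 for method in results}
    for metric, higher in higher_is_better.items():
        vals = [r[metric] for r in results.values()]
        lo, hi = min(vals), max(vals)
        for method, r in results.items():
            gain = (r[metric] - lo) / (hi - lo) if hi > lo else 1.0
            nrg[method] += (gain if higher else 1.0 - gain) / len(higher_is_better)
    return nrg

# Toy usage with made-up numbers: accuracy and comprehensiveness are
# higher-is-better, sufficiency is lower-is-better.
print(normalized_relative_gain(
    {"UNIREX": {"acc": 0.91, "comp": 0.40, "suff": 0.05},
     "baseline": {"acc": 0.90, "comp": 0.25, "suff": 0.12}},
    {"acc": True, "comp": True, "suff": False},
))
```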
- Anthology ID: 2022.bigscience-1.5
- Volume: Proceedings of BigScience Episode #5 -- Workshop on Challenges & Perspectives in Creating Large Language Models
- Month: May
- Year: 2022
- Address: virtual+Dublin
- Editors: Angela Fan, Suzana Ilić, Thomas Wolf, Matthias Gallé
- Venue: BigScience
- Publisher: Association for Computational Linguistics
- Pages: 51–67
- URL: https://aclanthology.org/2022.bigscience-1.5
- DOI: 10.18653/v1/2022.bigscience-1.5
- Cite (ACL): Aaron Chan, Maziar Sanjabi, Lambert Mathias, Liang Tan, Shaoliang Nie, Xiaochang Peng, Xiang Ren, and Hamed Firooz. 2022. UNIREX: A Unified Learning Framework for Language Model Rationale Extraction. In Proceedings of BigScience Episode #5 -- Workshop on Challenges & Perspectives in Creating Large Language Models, pages 51–67, virtual+Dublin. Association for Computational Linguistics.
- Cite (Informal): UNIREX: A Unified Learning Framework for Language Model Rationale Extraction (Chan et al., BigScience 2022)
- PDF: https://aclanthology.org/2022.bigscience-1.5.pdf
- Code: facebookresearch/unirex
- Data: CoS-E, Hate Speech, MultiRC, SST, e-SNLI