Specializing Pre-trained Language Models for Better Relational Reasoning via Network Pruning

Siyu Ren, Kenny Zhu


Abstract
Pretrained masked language models (PLMs) have been shown to inherit a considerable amount of relational knowledge from their source corpora. In this paper, we present an in-depth and comprehensive study of specializing PLMs into relational models from the perspective of network pruning. We show that it is possible to find subnetworks that represent grounded commonsense relations at non-trivial sparsity while generalizing better than the original PLMs in scenarios requiring knowledge of single or multiple commonsense relations.
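To make the pruning perspective concrete, the sketch below shows generic magnitude pruning of a pretrained masked LM with PyTorch's pruning utilities. This is only an illustration of extracting a sparse subnetwork from a PLM, not the authors' actual method (which, per the abstract, specializes subnetworks for commonsense relations); the model name and 50% sparsity level are arbitrary assumptions.

```python
# Minimal sketch, assuming bert-base-uncased and a 50% sparsity target.
# Plain L1 magnitude pruning is a stand-in for the paper's pruning procedure.
import torch.nn as nn
import torch.nn.utils.prune as prune
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Zero out the 50% smallest-magnitude weights in every linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the mask into the weights

# Report the resulting sparsity over linear-layer weights.
total = zeros = 0
for module in model.modules():
    if isinstance(module, nn.Linear):
        total += module.weight.numel()
        zeros += (module.weight == 0).sum().item()
print(f"sparsity over linear weights: {zeros / total:.2%}")
```

The pruned model remains a drop-in masked LM, so the surviving subnetwork can be probed or fine-tuned on relation-specific data in the same way as the dense PLM.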
Anthology ID:
2022.findings-naacl.169
Volume:
Findings of the Association for Computational Linguistics: NAACL 2022
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2195–2207
URL:
https://aclanthology.org/2022.findings-naacl.169
DOI:
10.18653/v1/2022.findings-naacl.169
Cite (ACL):
Siyu Ren and Kenny Zhu. 2022. Specializing Pre-trained Language Models for Better Relational Reasoning via Network Pruning. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 2195–2207, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Specializing Pre-trained Language Models for Better Relational Reasoning via Network Pruning (Ren & Zhu, Findings 2022)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2022.findings-naacl.169.pdf
Software:
 2022.findings-naacl.169.software.zip
Code:
 drsy/lamp
Data:
 COPA, CommonsenseQA, CosmosQA, LAMA, SWAG, WSC