Systematic Generalization by Finetuning? Analyzing Pretrained Language Models Using Constituency Tests

Aishik Chakraborty, Jackie CK Cheung, Timothy J. O’Donnell


Abstract
Constituents are groups of words that behave as a syntactic unit. Many linguistic phenomena (e.g., question formation, diathesis alternations) require the manipulation and rearrangement of constituents in a sentence. In this paper, we investigate how different finetuning setups affect the ability of pretrained sequence-to-sequence language models such as BART and T5 to replicate constituency tests — transformations that involve manipulating constituents in a sentence. We design multiple evaluation settings by varying the combinations of constituency tests and sentence types that a model is exposed to during finetuning. We show that models can replicate a linguistic transformation on a specific type of sentence that they saw during finetuning, but performance degrades substantially in other settings, indicating a lack of systematic generalization. These results suggest that models often learn to manipulate sentences at a surface level unrelated to the constituent-level syntactic structure, for example by copying the first word of a sentence. This may partially explain the brittleness of pretrained language models in downstream tasks.
Anthology ID:
2023.blackboxnlp-1.27
Volume:
Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
Month:
December
Year:
2023
Address:
Singapore
Editors:
Yonatan Belinkov, Sophie Hao, Jaap Jumelet, Najoung Kim, Arya McCarthy, Hosein Mohebbi
Venues:
BlackboxNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
357–366
URL:
https://aclanthology.org/2023.blackboxnlp-1.27
DOI:
10.18653/v1/2023.blackboxnlp-1.27
Cite (ACL):
Aishik Chakraborty, Jackie CK Cheung, and Timothy J. O’Donnell. 2023. Systematic Generalization by Finetuning? Analyzing Pretrained Language Models Using Constituency Tests. In Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 357–366, Singapore. Association for Computational Linguistics.
Cite (Informal):
Systematic Generalization by Finetuning? Analyzing Pretrained Language Models Using Constituency Tests (Chakraborty et al., BlackboxNLP-WS 2023)
PDF:
https://preview.aclanthology.org/improve-issue-templates/2023.blackboxnlp-1.27.pdf
Video:
https://preview.aclanthology.org/improve-issue-templates/2023.blackboxnlp-1.27.mp4