Infusing Finetuning with Semantic Dependencies

Zhaofeng Wu, Hao Peng, Noah A. Smith


Abstract
For natural language processing systems, two kinds of evidence support the use of text representations from neural language models “pretrained” on large unannotated corpora: performance on application-inspired benchmarks (Peters et al., 2018, inter alia), and the emergence of syntactic abstractions in those representations (Tenney et al., 2019, inter alia). On the other hand, the lack of grounded supervision calls into question how well these representations can ever capture meaning (Bender and Koller, 2020). We apply novel probes to recent language models—specifically focusing on predicate-argument structure as operationalized by semantic dependencies (Ivanova et al., 2012)—and find that, unlike syntax, semantics is not brought to the surface by today’s pretrained models. We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning, yielding benefits to natural language understanding (NLU) tasks in the GLUE benchmark. This approach demonstrates the potential for general-purpose (rather than task-specific) linguistic supervision, above and beyond conventional pretraining and finetuning. Several diagnostics help to localize the benefits of our approach.
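To make the architecture concrete: the abstract describes feeding pretrained token representations through a graph encoder defined over a semantic dependency parse during finetuning. Below is a minimal, hypothetical PyTorch sketch of one relation-aware graph convolution layer in that spirit; the class name `SemanticGraphLayer`, the edge-triple format, and all dimensions are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn


class SemanticGraphLayer(nn.Module):
    """One relation-aware graph convolution over a semantic dependency graph.

    Illustrative sketch only: each token's pretrained representation is
    combined with relation-specific messages from its predicate neighbors.
    """

    def __init__(self, hidden_dim: int, num_relations: int):
        super().__init__()
        # One projection per semantic dependency relation type (RGCN-style).
        self.rel_proj = nn.ModuleList(
            nn.Linear(hidden_dim, hidden_dim) for _ in range(num_relations)
        )
        self.self_proj = nn.Linear(hidden_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, h: torch.Tensor, edges):
        # h: (num_tokens, hidden_dim) token states from a pretrained encoder.
        # edges: (head, dependent, relation_id) triples from a semantic parse.
        msg = self.self_proj(h)
        for head, dep, rel in edges:
            # The argument token receives a relation-specific message
            # from its predicate.
            msg[dep] = msg[dep] + self.rel_proj[rel](h[head])
        return self.act(msg)


# Usage with toy inputs: token states from any pretrained encoder
# (e.g., 768-dim RoBERTa-base states) plus a two-edge parse.
hidden = torch.randn(5, 768)
parse = [(1, 0, 0), (1, 3, 1)]  # (predicate, argument, relation_id)
layer = SemanticGraphLayer(hidden_dim=768, num_relations=4)
out = layer(hidden, parse)      # graph-infused token representations
print(out.shape)                # torch.Size([5, 768])
```

In a finetuning setup, the output of such a layer would be passed to the task classifier in place of (or alongside) the raw pretrained states, so the task head sees representations enriched with predicate-argument structure.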
Anthology ID:
2021.tacl-1.14
Volume:
Transactions of the Association for Computational Linguistics, Volume 9
Year:
2021
Address:
Cambridge, MA
Editors:
Brian Roark, Ani Nenkova
Venue:
TACL
Publisher:
MIT Press
Pages:
226–242
URL:
https://aclanthology.org/2021.tacl-1.14
DOI:
10.1162/tacl_a_00363
Cite (ACL):
Zhaofeng Wu, Hao Peng, and Noah A. Smith. 2021. Infusing Finetuning with Semantic Dependencies. Transactions of the Association for Computational Linguistics, 9:226–242.
Cite (Informal):
Infusing Finetuning with Semantic Dependencies (Wu et al., TACL 2021)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/2021.tacl-1.14.pdf
Video:
https://preview.aclanthology.org/naacl-24-ws-corrections/2021.tacl-1.14.mp4