Abstract
The distinction between arguments and adjuncts is a fundamental assumption of several linguistic theories. In this study, we investigate to what extent this distinction is picked up by a Transformer-based language model. We use BERT as a case study, operationalizing arguments and adjuncts as core and non-core FrameNet frame elements, respectively, and tying them to activations of particular BERT neurons. We present evidence, from English and Korean, that BERT learns more dedicated representations for arguments than for adjuncts when fine-tuned on the FrameNet frame-identification task. We also show that this distinction is already present in a weaker form in the vanilla pre-trained model.
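The operationalization described in the abstract (arguments as core frame elements, adjuncts as non-core ones) can be illustrated with NLTK's FrameNet interface. This is a minimal sketch of the distinction only, not the authors' code; the frame name `Giving` is just an example, and any FrameNet frame would do:

```python
# Sketch of the core/non-core frame-element split used to operationalize
# arguments vs. adjuncts. Illustration only, not the paper's pipeline.
from nltk.corpus import framenet as fn  # requires: nltk.download('framenet_v17')

frame = fn.frame('Giving')  # example frame (hypothetical choice)

# Core FEs (e.g. Donor, Recipient, Theme) stand in for arguments;
# Peripheral and Extra-Thematic FEs (e.g. Time, Place) for adjuncts.
arguments = [name for name, fe in frame.FE.items()
             if fe.coreType in ('Core', 'Core-Unexpressed')]
adjuncts = [name for name, fe in frame.FE.items()
            if fe.coreType in ('Peripheral', 'Extra-Thematic')]

print('arguments:', sorted(arguments))
print('adjuncts: ', sorted(adjuncts))
```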
- Anthology ID: 2023.iwcs-1.23
- Volume: Proceedings of the 15th International Conference on Computational Semantics
- Month: June
- Year: 2023
- Address: Nancy, France
- Editors: Maxime Amblard, Ellen Breitholtz
- Venue: IWCS
- SIG: SIGSEM
- Publisher: Association for Computational Linguistics
- Pages: 233–239
- URL: https://aclanthology.org/2023.iwcs-1.23
- Cite (ACL): Dmitry Nikolaev and Sebastian Padó. 2023. The argument–adjunct distinction in BERT: A FrameNet-based investigation. In Proceedings of the 15th International Conference on Computational Semantics, pages 233–239, Nancy, France. Association for Computational Linguistics.
- Cite (Informal): The argument–adjunct distinction in BERT: A FrameNet-based investigation (Nikolaev & Padó, IWCS 2023)
- PDF: https://preview.aclanthology.org/ingest-2024-clasp/2023.iwcs-1.23.pdf