Spa: On the Sparsity of Virtual Adversarial Training for Dependency Parsing

Chao Lou, Wenjuan Han, Kewei Tu


Abstract
Virtual adversarial training (VAT) is a powerful approach to improving robustness and performance, leveraging both labeled and unlabeled data to compensate for the scarcity of labeled data. It has been adopted in many vision and language classification tasks. However, for tasks with structured outputs (e.g., dependency parsing), applying VAT is nontrivial due to two intrinsic properties of structures: (1) the non-sparsity problem and (2) exponential complexity. Against this background, we propose the Sparse Parse Adjustment (spa) algorithm and successfully apply VAT to dependency parsing. spa is a learning algorithm that combines a graph-based dependency parsing model with VAT in an exact computational manner and enhances the dependency parser with controllable and adjustable sparsity. Empirical studies show that the TreeCRF parser optimized with spa outperforms other methods without sparsity regularization.
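As background for the abstract above, the following is a minimal sketch of the generic VAT regularizer (Miyato et al., 2018) that spa builds on, written for a plain classifier in PyTorch. This is not the paper's algorithm: spa replaces the per-example label distribution below with an exact TreeCRF distribution over dependency trees and adds sparsity control. The function name and hyperparameters (`xi`, `eps`, `n_power`) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=1.0, n_power=1):
    """Generic VAT regularizer: KL(p(y|x) || p(y|x + r_adv)),
    where r_adv is estimated by power iteration (Miyato et al., 2018)."""
    with torch.no_grad():
        # Clean prediction, treated as a constant target distribution.
        logp = F.log_softmax(model(x), dim=-1)

    # Random unit-norm direction seeding the power iteration.
    d = torch.randn_like(x)
    d = d / d.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1)))

    for _ in range(n_power):
        d.requires_grad_(True)
        logp_hat = F.log_softmax(model(x + xi * d), dim=-1)
        adv_div = F.kl_div(logp_hat, logp.exp(), reduction="batchmean")
        grad = torch.autograd.grad(adv_div, d)[0]
        # Re-normalize the gradient to get the next power-iteration direction.
        d = grad / grad.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1)))
        d = d.detach()

    # Virtual adversarial perturbation and the resulting smoothness penalty.
    r_adv = eps * d
    logp_hat = F.log_softmax(model(x + r_adv), dim=-1)
    return F.kl_div(logp_hat, logp.exp(), reduction="batchmean")
```

The nontrivial part the paper addresses is that for dependency parsing the output distribution ranges over exponentially many trees, so the KL term above cannot be computed naively; spa computes it exactly for a graph-based TreeCRF parser.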
Anthology ID:
2022.findings-aacl.2
Volume:
Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022
Month:
November
Year:
2022
Address:
Online only
Editors:
Yulan He, Heng Ji, Sujian Li, Yang Liu, Chia-Hui Chang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
11–21
URL:
https://aclanthology.org/2022.findings-aacl.2
Cite (ACL):
Chao Lou, Wenjuan Han, and Kewei Tu. 2022. Spa: On the Sparsity of Virtual Adversarial Training for Dependency Parsing. In Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022, pages 11–21, Online only. Association for Computational Linguistics.
Cite (Informal):
Spa: On the Sparsity of Virtual Adversarial Training for Dependency Parsing (Lou et al., Findings 2022)
PDF:
https://preview.aclanthology.org/nschneid-patch-3/2022.findings-aacl.2.pdf
Software:
2022.findings-aacl.2.Software.zip