Sequence Labeling Parsing by Learning across Representations

Michalina Strzyz, David Vilares, Carlos Gómez-Rodríguez


Abstract
We use parsing as sequence labeling as a common framework to learn across constituency and dependency syntactic abstractions. To do so, we cast the problem as multitask learning (MTL). First, we show that adding a parsing paradigm as an auxiliary loss consistently improves the performance on the other paradigm. Second, we explore an MTL sequence labeling model that parses both representations, at almost no cost in terms of performance and speed. The results across the board show that on average MTL models with auxiliary losses for constituency parsing outperform single-task ones by 1.05 F1 points, and for dependency parsing by 0.62 UAS points.
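The auxiliary-loss setup described in the abstract can be sketched as a shared encoder feeding one softmax head per parsing paradigm, with the secondary paradigm's loss down-weighted. The sketch below is a minimal illustration and not the authors' implementation: the dimensions, the toy linear "encoder", the label counts, and the `aux_weight` value are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions, not taken from the paper)
HIDDEN = 8          # shared-encoder output size per word
N_DEP_LABELS = 5    # dependency labels (e.g., encoded head positions)
N_CONST_LABELS = 4  # constituency labels (e.g., common-ancestor encodings)

# Shared representation for a 3-word sentence (stands in for a BiLSTM encoder)
shared = rng.normal(size=(3, HIDDEN))

# One output head per parsing paradigm
W_dep = rng.normal(size=(HIDDEN, N_DEP_LABELS))
W_const = rng.normal(size=(HIDDEN, N_CONST_LABELS))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, gold):
    # Mean negative log-likelihood of the gold label per word
    return -np.log(probs[np.arange(len(gold)), gold]).mean()

# Hypothetical gold label sequences for the 3 words
dep_gold = np.array([0, 2, 1])
const_gold = np.array([3, 1, 0])

dep_loss = cross_entropy(softmax(shared @ W_dep), dep_gold)
const_loss = cross_entropy(softmax(shared @ W_const), const_gold)

# MTL with an auxiliary loss: the secondary paradigm contributes
# a down-weighted term to the training objective
aux_weight = 0.1  # hypothetical weighting factor
total_loss = dep_loss + aux_weight * const_loss
```

In the double-task variant both heads would instead be trained as full tasks (weight 1.0 each), so a single model outputs both a dependency and a constituency label sequence per sentence.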
Anthology ID:
P19-1531
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
5350–5357
URL:
https://aclanthology.org/P19-1531
DOI:
10.18653/v1/P19-1531
Cite (ACL):
Michalina Strzyz, David Vilares, and Carlos Gómez-Rodríguez. 2019. Sequence Labeling Parsing by Learning across Representations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5350–5357, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Sequence Labeling Parsing by Learning across Representations (Strzyz et al., ACL 2019)
PDF:
https://preview.aclanthology.org/ingestion-script-update/P19-1531.pdf
Code
mstrise/seq2label-crossrep
Data
Penn Treebank