Quantifying training challenges of dependency parsers

Lauriane Aufrant, Guillaume Wisniewski, François Yvon


Abstract
Not all dependencies are equal when training a dependency parser: some are straightforward enough to be learned from only a sample of the data, while others embed more complexity. This work introduces a series of metrics to quantify those differences, and thereby to expose the shortcomings of various parsing algorithms and strategies. Beyond enabling a more thorough comparison of parsing systems, these new tools also prove useful for characterizing the information conveyed by cross-lingual parsers, in a quantitative yet interpretable way.
Anthology ID:
C18-1270
Volume:
Proceedings of the 27th International Conference on Computational Linguistics
Month:
August
Year:
2018
Address:
Santa Fe, New Mexico, USA
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
3191–3202
URL:
https://aclanthology.org/C18-1270
Cite (ACL):
Lauriane Aufrant, Guillaume Wisniewski, and François Yvon. 2018. Quantifying training challenges of dependency parsers. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3191–3202, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal):
Quantifying training challenges of dependency parsers (Aufrant et al., COLING 2018)
PDF:
https://aclanthology.org/C18-1270.pdf