Abstract
Even though SRL has been studied for many languages, major improvements have mostly been obtained for English, for which more resources are available. Existing multilingual SRL datasets contain disparate annotation styles or come from different domains, hampering generalization in multilingual learning. In this work we propose a method to automatically construct an SRL corpus that is parallel in four languages (English, French, German, and Spanish), with unified predicate and role annotations that are fully comparable across languages. We apply high-quality machine translation to the English CoNLL-09 dataset and use multilingual BERT to project its gold annotations to the target languages. We include human-validated test sets that we use to measure projection quality, and show that our projection is denser and more precise than a strong baseline. Finally, we train different state-of-the-art models on our novel corpus for mono- and multilingual SRL, showing that the multilingual annotations improve performance, especially for the weaker languages.
- Anthology ID:
- 2020.emnlp-main.321
- Volume:
- Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
- Month:
- November
- Year:
- 2020
- Address:
- Online
- Venue:
- EMNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 3904–3914
- URL:
- https://aclanthology.org/2020.emnlp-main.321
- DOI:
- 10.18653/v1/2020.emnlp-main.321
- Cite (ACL):
- Angel Daza and Anette Frank. 2020. X-SRL: A Parallel Cross-Lingual Semantic Role Labeling Dataset. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3904–3914, Online. Association for Computational Linguistics.
- Cite (Informal):
- X-SRL: A Parallel Cross-Lingual Semantic Role Labeling Dataset (Daza & Frank, EMNLP 2020)
- PDF:
- https://aclanthology.org/2020.emnlp-main.321.pdf
- Code:
- Heidelberg-NLP/xsrl_mbert_aligner
- Data:
- X-SRL
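
The core idea in the abstract, projecting SRL labels from English source tokens onto translated target tokens via embedding-based word alignment, can be illustrated with a minimal sketch. Note the assumptions: the real system uses multilingual BERT contextual embeddings and the alignment logic of `xsrl_mbert_aligner`; here `project_labels` is a hypothetical helper using plain vectors and greedy cosine-similarity alignment, purely for illustration.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def project_labels(src_vecs, tgt_vecs, src_labels, threshold=0.5):
    """Greedily align each target token to its most similar source token
    and copy that source token's SRL label; tokens with no sufficiently
    similar source counterpart stay unlabeled ("O").

    In the paper the vectors would come from multilingual BERT over the
    English sentence and its machine translation; here they are arbitrary.
    """
    tgt_labels = []
    for tv in tgt_vecs:
        sims = [cosine(tv, sv) for sv in src_vecs]
        best = int(np.argmax(sims))
        tgt_labels.append(src_labels[best] if sims[best] >= threshold else "O")
    return tgt_labels
```

For example, with two toy source tokens labeled `A0` and `A1` and target tokens whose vectors match them in reversed order, the projected sequence comes back reordered accordingly, which is the behavior a word-alignment-based projection needs when translation changes word order.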