Abstract
Much research effort has been devoted to semantic role labeling (SRL), which is crucial for natural language understanding. Supervised approaches have achieved impressive performance when large-scale annotated corpora are available, as for resource-rich languages such as English. For low-resource languages with no annotated SRL dataset, however, obtaining competitive performance remains challenging. Cross-lingual SRL is one promising way to address this problem, and it has achieved great advances with the help of model transfer and annotation projection. In this paper, we propose a novel alternative based on corpus translation, constructing high-quality training datasets for target languages from the source gold-standard SRL annotations. Experimental results on the Universal Proposition Bank show that the translation-based method is highly effective, and that the automatically produced pseudo datasets can significantly improve target-language SRL performance.
- Anthology ID:
- 2020.acl-main.627
- Volume:
- Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
- Month:
- July
- Year:
- 2020
- Address:
- Online
- Venue:
- ACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 7014–7026
- URL:
- https://aclanthology.org/2020.acl-main.627
- DOI:
- 10.18653/v1/2020.acl-main.627
- Cite (ACL):
- Hao Fei, Meishan Zhang, and Donghong Ji. 2020. Cross-Lingual Semantic Role Labeling with High-Quality Translated Training Corpus. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7014–7026, Online. Association for Computational Linguistics.
- Cite (Informal):
- Cross-Lingual Semantic Role Labeling with High-Quality Translated Training Corpus (Fei et al., ACL 2020)
- PDF:
- https://preview.aclanthology.org/ingestion-script-update/2020.acl-main.627.pdf
- Code:
- scofield7419/XSRL-ACL