This paper has been retracted: it was already published elsewhere, and the authors requested its withdrawal.
Abstract
There have been efforts in cross-lingual transfer learning for various tasks. We present an approach that uses Mixup, an interpolative data augmentation method, to improve the generalizability of part-of-speech tagging models trained on a source language, improving their performance on unseen target languages. Through experiments on ten languages with diverse structures and language roots, we demonstrate the approach's applicability to downstream zero-shot cross-lingual tasks.
- Anthology ID:
- 2021.mrl-1.22
- Original:
- 2021.mrl-1.22v1
- Version 2:
- 2021.mrl-1.22v2
- Volume:
- Proceedings of the 1st Workshop on Multilingual Representation Learning
- Month:
- November
- Year:
- 2021
- Address:
- Punta Cana, Dominican Republic
- Venue:
- MRL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 245–247
- URL:
- https://aclanthology.org/2021.mrl-1.22
- DOI:
- 10.18653/v1/2021.mrl-1.22
- PDF:
- https://preview.aclanthology.org/nodalida-main-page/2021.mrl-1.22.pdf
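The abstract's core idea, Mixup, forms new training examples as convex combinations of pairs of examples. A minimal sketch of the generic Mixup interpolation is shown below; the function name and parameters are illustrative, not from the paper, which applies this idea to part-of-speech tagging representations.

```python
import random

def mixup(x_i, x_j, y_i, y_j, alpha=0.2):
    """Interpolate a pair of feature vectors and their (one-hot) label
    vectors, as in generic Mixup. `alpha` controls the Beta distribution
    the mixing coefficient is drawn from (a common default is 0.2)."""
    # lam ~ Beta(alpha, alpha); random.betavariate is in the stdlib
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x_i, x_j)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y_i, y_j)]
    return x, y, lam
```

Because each mixed example lies on the line segment between its two parents, the augmented data smooths decision boundaries, which is the property the paper exploits to generalize from a source language to unseen target languages.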