Abstract
We experiment with XLM-RoBERTa for Word-in-Context disambiguation in the multilingual and cross-lingual settings, with the aim of developing a single model that has knowledge of both. We frame the task as binary classification and additionally experiment with data augmentation and adversarial training techniques, as well as a two-stage training technique. Our approaches improve both performance and robustness.
- Anthology ID: 2021.semeval-1.98
- Volume: Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
- Month: August
- Year: 2021
- Address: Online
- Editors: Alexis Palmer, Nathan Schneider, Natalie Schluter, Guy Emerson, Aurelie Herbelot, Xiaodan Zhu
- Venue: SemEval
- SIG: SIGLEX
- Publisher: Association for Computational Linguistics
- Pages: 743–747
- URL: https://aclanthology.org/2021.semeval-1.98
- DOI: 10.18653/v1/2021.semeval-1.98
- Cite (ACL): Harsh Goyal, Aadarsh Singh, and Priyanshu Kumar. 2021. PAW at SemEval-2021 Task 2: Multilingual and Cross-lingual Word-in-Context Disambiguation : Exploring Cross Lingual Transfer, Augmentations and Adversarial Training. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 743–747, Online. Association for Computational Linguistics.
- Cite (Informal): PAW at SemEval-2021 Task 2: Multilingual and Cross-lingual Word-in-Context Disambiguation : Exploring Cross Lingual Transfer, Augmentations and Adversarial Training (Goyal et al., SemEval 2021)
- PDF: https://preview.aclanthology.org/nschneid-patch-1/2021.semeval-1.98.pdf
- Data: WiC
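The abstract frames Word-in-Context disambiguation as binary classification over a pair of contexts. A minimal sketch of that framing is below; the `<t> ... </t>` target markers and the `</s>` pair separator are illustrative assumptions about the input encoding, not necessarily the authors' exact scheme.

```python
# Hedged sketch: Word-in-Context (WiC) disambiguation as binary
# classification over a sentence pair. The marker tokens and separator
# below are assumptions for illustration, not the paper's exact format.

SEP = " </s> "  # pair separator in XLM-RoBERTa-style inputs (assumed)

def build_wic_input(sentence1: str, span1: tuple, sentence2: str, span2: tuple) -> str:
    """Mark the target word in each context with <t> ... </t> (hypothetical
    markers) and concatenate the pair into one classifier input string."""
    def mark(sent: str, span: tuple) -> str:
        start, end = span
        return sent[:start] + "<t> " + sent[start:end] + " </t>" + sent[end:]
    return mark(sentence1, span1) + SEP + mark(sentence2, span2)

# Binary labels: do the two occurrences of the target share the same sense?
LABELS = {"F": 0, "T": 1}

example = build_wic_input(
    "He sat on the bank of the river.", (14, 18),
    "She deposited money at the bank.", (27, 31),
)
```

A sequence classifier (e.g. XLM-RoBERTa with a two-way head) would then be fine-tuned on such paired inputs against the `T`/`F` labels.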