Abstract
In this paper, we advocate for using large pre-trained monolingual language models in cross-lingual zero-shot word sense disambiguation (WSD), coupled with a contextualized mapping mechanism. We also report rigorous experiments that illustrate the effectiveness of employing sparse contextualized word representations obtained via a dictionary learning procedure. Our experimental results demonstrate that these modifications yield a significant improvement of nearly 6.5 points in average F-score (from 62.0 to 68.5) across a set of 17 typologically diverse target languages. We release our source code for replicating our experiments at https://github.com/begab/sparsity_makes_sense.
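For readers unfamiliar with the core building block, the following is a minimal, hypothetical sketch of sparse-coding contextualized embeddings with dictionary learning, in the spirit of the abstract; it uses scikit-learn's `DictionaryLearning` and made-up data and hyperparameters, and is not the paper's actual pipeline (see the repository above for that):

```python
# Illustrative sketch only: learn an overcomplete dictionary D and sparse
# codes alpha such that X ~= alpha @ D, with an L1 penalty on alpha.
# All sizes and settings below are hypothetical, not the paper's.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 64))  # stand-in for contextual embeddings

dict_learner = DictionaryLearning(
    n_components=128,              # hypothetical (overcomplete) dictionary size
    transform_algorithm="lasso_lars",
    transform_alpha=0.1,           # sparsity strength (hypothetical)
    fit_algorithm="lars",
    random_state=42,
)
sparse_codes = dict_learner.fit_transform(X)  # shape: (200, 128)

# Each row is now a sparse representation whose few nonzero coordinates
# can serve as discrete semantic features for downstream tasks such as WSD.
print("avg. nonzeros per vector:", np.count_nonzero(sparse_codes, axis=1).mean())
```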
 - Anthology ID: 2022.naacl-main.176
 - Volume: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
 - Month: July
 - Year: 2022
 - Address: Seattle, United States
 - Venue: NAACL
 - Publisher: Association for Computational Linguistics
 - Pages: 2459–2471
 - URL: https://aclanthology.org/2022.naacl-main.176
 - DOI: 10.18653/v1/2022.naacl-main.176
 - Cite (ACL): Gábor Berend. 2022. Combating the Curse of Multilinguality in Cross-Lingual WSD by Aligning Sparse Contextualized Word Representations. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2459–2471, Seattle, United States. Association for Computational Linguistics.
 - Cite (Informal): Combating the Curse of Multilinguality in Cross-Lingual WSD by Aligning Sparse Contextualized Word Representations (Berend, NAACL 2022)
 - PDF: https://preview.aclanthology.org/ingestion-script-update/2022.naacl-main.176.pdf
 - Code: begab/sparsity_makes_sense
 - Data: word2word