Contextual Embeddings for Ukrainian: A Large Language Model Approach to Word Sense Disambiguation

Yurii Laba, Volodymyr Mudryi, Dmytro Chaplynskyi, Mariana Romanyshyn, Oles Dobosevych


Abstract
This research proposes a novel approach to the Word Sense Disambiguation (WSD) task in the Ukrainian language, based on supervised fine-tuning of a pre-trained Large Language Model (LLM) on a dataset generated in an unsupervised way, to obtain better contextual embeddings for words with multiple senses. The paper presents a method for generating a new dataset for WSD evaluation in the Ukrainian language based on the SUM dictionary. We developed a comprehensive framework that facilitates the generation of WSD evaluation datasets, enables the use of different prediction strategies, LLMs, and pooling strategies, and generates multiple performance reports. Our approach achieves 77.9% accuracy in lexical meaning prediction for homonyms.
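The sense-matching idea underlying this kind of WSD pipeline can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy vectors stand in for contextual and gloss embeddings that a fine-tuned encoder would produce, and `disambiguate` is a hypothetical helper that picks the dictionary sense whose gloss embedding is most similar to the target word's contextual embedding.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def disambiguate(context_vec: np.ndarray, sense_glosses: dict) -> str:
    """Return the sense whose gloss embedding is closest (by cosine
    similarity) to the contextual embedding of the ambiguous word."""
    scores = {sense: cosine(context_vec, gloss_vec)
              for sense, gloss_vec in sense_glosses.items()}
    return max(scores, key=scores.get)

# Toy 3-d vectors standing in for real encoder outputs (hypothetical).
context = np.array([0.9, 0.1, 0.0])          # embedding of the word in context
glosses = {
    "sense_1": np.array([1.0, 0.0, 0.0]),    # gloss embedding close to context
    "sense_2": np.array([0.0, 1.0, 0.0]),
}
print(disambiguate(context, glosses))  # -> sense_1
```

In a real setting, the contextual embedding would come from a (possibly pooled) hidden state of the fine-tuned LLM at the target word's position, and each gloss embedding from encoding the corresponding SUM dictionary definition.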
Anthology ID:
2023.unlp-1.2
Volume:
Proceedings of the Second Ukrainian Natural Language Processing Workshop (UNLP)
Month:
May
Year:
2023
Address:
Dubrovnik, Croatia
Editor:
Mariana Romanyshyn
Venue:
UNLP
Publisher:
Association for Computational Linguistics
Pages:
11–19
URL:
https://aclanthology.org/2023.unlp-1.2
DOI:
10.18653/v1/2023.unlp-1.2
Cite (ACL):
Yurii Laba, Volodymyr Mudryi, Dmytro Chaplynskyi, Mariana Romanyshyn, and Oles Dobosevych. 2023. Contextual Embeddings for Ukrainian: A Large Language Model Approach to Word Sense Disambiguation. In Proceedings of the Second Ukrainian Natural Language Processing Workshop (UNLP), pages 11–19, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal):
Contextual Embeddings for Ukrainian: A Large Language Model Approach to Word Sense Disambiguation (Laba et al., UNLP 2023)
PDF:
https://preview.aclanthology.org/naacl24-info/2023.unlp-1.2.pdf
Video:
https://preview.aclanthology.org/naacl24-info/2023.unlp-1.2.mp4