Abstract
This paper presents our contribution to SemEval-2021 Task 2: Multilingual and Cross-lingual Word-in-Context Disambiguation (MCL-WiC). Our experiments cover the English (EN-EN) sub-track of the task's multilingual setting. We experiment with several pre-trained language models and investigate the impact of different top layers on fine-tuning. We find that the combination of cosine similarity and ReLU activation yields the most effective fine-tuning procedure. Our best model achieves 92.7% accuracy, the fourth-best score in the EN-EN sub-track.
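As a minimal sketch of the idea in the title, the top layer below computes the cosine similarity between the contextual embeddings of the target word in the two sentences, applies ReLU, and feeds the result to a linear binary classifier. This is an illustration under assumptions, not the authors' exact implementation (see zhestyatsky/MCL-WiC for that): how the target-word embeddings are pooled and the classifier dimensions are hypothetical here.

```python
import torch
import torch.nn as nn

class CosineReluHead(nn.Module):
    """Hypothetical WiC top layer: ReLU over cosine similarity.

    Assumes `emb1` and `emb2` are (batch, hidden) contextual
    embeddings of the target word in the two contexts, e.g. pooled
    from a pre-trained BERT encoder's subword outputs.
    """

    def __init__(self) -> None:
        super().__init__()
        self.cos = nn.CosineSimilarity(dim=-1)
        self.relu = nn.ReLU()
        # Map the single similarity score to two logits:
        # "same sense" vs. "different sense".
        self.classifier = nn.Linear(1, 2)

    def forward(self, emb1: torch.Tensor, emb2: torch.Tensor) -> torch.Tensor:
        sim = self.cos(emb1, emb2).unsqueeze(-1)  # (batch, 1)
        return self.classifier(self.relu(sim))    # (batch, 2)

# Usage with dummy BERT-base-sized embeddings:
head = CosineReluHead()
logits = head(torch.randn(4, 768), torch.randn(4, 768))
```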
- Anthology ID:
- 2021.semeval-1.17
- Volume:
- Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
- Month:
- August
- Year:
- 2021
- Address:
- Online
- Venue:
- SemEval
- SIGs:
- SIGLEX | SIGSEM
- Publisher:
- Association for Computational Linguistics
- Pages:
- 163–168
- URL:
- https://aclanthology.org/2021.semeval-1.17
- DOI:
- 10.18653/v1/2021.semeval-1.17
- Cite (ACL):
- Boris Zhestiankin and Maria Ponomareva. 2021. Zhestyatsky at SemEval-2021 Task 2: ReLU over Cosine Similarity for BERT Fine-tuning. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 163–168, Online. Association for Computational Linguistics.
- Cite (Informal):
- Zhestyatsky at SemEval-2021 Task 2: ReLU over Cosine Similarity for BERT Fine-tuning (Zhestiankin & Ponomareva, SemEval 2021)
- PDF:
- https://aclanthology.org/2021.semeval-1.17.pdf
- Code:
- zhestyatsky/MCL-WiC
- Data:
- SuperGLUE, WiC