Abstract
This paper reports on an investigation into the use of pre-trained language models for the identification of Irish verbal multiword expressions (vMWEs), comparing the results with the systems submitted to edition 1.2 of the PARSEME shared task. We compare a monolingual BERT model for Irish (gaBERT) with multilingual BERT (mBERT), each fine-tuned to perform MWE identification, and present a series of experiments exploring the impact of hyperparameter tuning and dataset optimisation steps on these models. We compare the results of our optimised systems with those achieved by other systems submitted to the shared task, and present some best practices for minority languages addressing this task.
- Anthology ID:
- 2022.mwe-1.13
- Volume:
- Proceedings of the 18th Workshop on Multiword Expressions @LREC2022
- Month:
- June
- Year:
- 2022
- Address:
- Marseille, France
- Editors:
- Archna Bhatia, Paul Cook, Shiva Taslimipoor, Marcos Garcia, Carlos Ramisch
- Venue:
- MWE
- SIG:
- SIGLEX
- Publisher:
- European Language Resources Association
- Pages:
- 89–99
- URL:
- https://aclanthology.org/2022.mwe-1.13
- Cite (ACL):
- Abigail Walsh, Teresa Lynn, and Jennifer Foster. 2022. A BERT’s Eye View: Identification of Irish Multiword Expressions Using Pre-trained Language Models. In Proceedings of the 18th Workshop on Multiword Expressions @LREC2022, pages 89–99, Marseille, France. European Language Resources Association.
- Cite (Informal):
- A BERT’s Eye View: Identification of Irish Multiword Expressions Using Pre-trained Language Models (Walsh et al., MWE 2022)
- PDF:
- https://preview.aclanthology.org/ingest-2024-clasp/2022.mwe-1.13.pdf