Cardiff University at SemEval-2020 Task 6: Fine-tuning BERT for Domain-Specific Definition Classification

Shelan Jeawak, Luis Espinosa-Anke, Steven Schockaert


Abstract
We describe the system submitted to SemEval-2020 Task 6, Subtask 1. The aim of this subtask is to predict whether a given sentence contains a definition. Unsurprisingly, we found that strong results can be achieved by fine-tuning a pre-trained BERT language model. In this paper, we analyze the performance of this strategy. Among other findings, we show that results can be improved by using a two-step fine-tuning process, in which the BERT model is first fine-tuned on the full training set, and then further specialized towards a target domain.
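As a rough illustration of the two-step fine-tuning strategy described in the abstract, the sketch below uses the Hugging Face transformers library with PyTorch. The model name, hyperparameters, helper function, and toy data are illustrative assumptions, not the authors' exact setup; the key point is that the same fine-tuning routine is applied twice, first on the full training set and then on target-domain sentences only.

    # Minimal sketch of two-step fine-tuning for definition classification.
    # Assumes Hugging Face transformers; hyperparameters and data are hypothetical.
    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from transformers import BertForSequenceClassification, BertTokenizerFast

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

    def fine_tune(model, sentences, labels, epochs=3, lr=2e-5, batch_size=16):
        # Tokenize the sentences and pair them with their binary labels.
        enc = tokenizer(sentences, truncation=True, padding=True, return_tensors="pt")
        data = TensorDataset(enc["input_ids"], enc["attention_mask"], torch.tensor(labels))
        optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
        model.train()
        for _ in range(epochs):
            for input_ids, attention_mask, y in DataLoader(data, batch_size=batch_size, shuffle=True):
                optimizer.zero_grad()
                # The model computes cross-entropy loss internally when labels are given.
                loss = model(input_ids=input_ids, attention_mask=attention_mask, labels=y).loss
                loss.backward()
                optimizer.step()
        return model

    # Toy placeholder data (hypothetical): label 1 = definition, 0 = non-definition.
    full_sents = ["A neuron is a cell that carries nerve impulses.",
                  "The experiment was repeated twice."]
    full_labels = [1, 0]
    domain_sents = ["A tort is a civil wrong that causes harm to another party.",
                    "The court adjourned until Monday."]
    domain_labels = [1, 0]

    model = fine_tune(model, full_sents, full_labels)      # step 1: full training set
    model = fine_tune(model, domain_sents, domain_labels)  # step 2: target-domain specialization

In practice the second step would use the subset of the official training data belonging to the evaluation domain; the two-stage schedule lets the model first learn what definitions look like in general before adapting to domain-specific phrasing.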
Anthology ID: 2020.semeval-1.44
Volume: Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month: December
Year: 2020
Address: Barcelona (online)
Editors: Aurelie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Venue: SemEval
SIG: SIGLEX
Publisher: International Committee for Computational Linguistics
Pages: 361–366
URL: https://aclanthology.org/2020.semeval-1.44
DOI: 10.18653/v1/2020.semeval-1.44
Cite (ACL): Shelan Jeawak, Luis Espinosa-Anke, and Steven Schockaert. 2020. Cardiff University at SemEval-2020 Task 6: Fine-tuning BERT for Domain-Specific Definition Classification. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 361–366, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal): Cardiff University at SemEval-2020 Task 6: Fine-tuning BERT for Domain-Specific Definition Classification (Jeawak et al., SemEval 2020)
PDF: https://preview.aclanthology.org/nschneid-patch-3/2020.semeval-1.44.pdf