Abstract
The use of pre-trained language models such as BERT and ULMFiT has become increasingly popular in shared tasks due to their powerful language modelling capabilities. Our entry to SemEval-2020 Task 7 uses ERNIE 2.0, a language model that is pre-trained on a large number of tasks to enrich the semantic and syntactic information it learns. ERNIE’s knowledge masking pre-training task is a unique method for learning about named entities, and we hypothesise that it may be useful on a dataset that is built from news headlines and contains many named entities. We optimize the hyperparameters of a regression model and a classification model and find that the selected hyperparameters produce larger gains for the classification model than for the regression model.
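As a rough illustration of the kind of hyperparameter sweep the abstract describes, the sketch below grid-searches learning rate, batch size, and epoch count separately for the two set-ups; it is a minimal sketch, not the authors' pipeline, and `fine_tune_and_evaluate` is a hypothetical placeholder for fine-tuning ERNIE 2.0 on either the rating (regression) or ranking (classification) objective. The candidate values are assumptions, not the values used in the paper.

```python
# Hedged sketch of a hyperparameter grid search for fine-tuning.
# fine_tune_and_evaluate() is a hypothetical stand-in for training ERNIE 2.0
# on the humor data and returning a validation metric (higher is better).
from itertools import product

def fine_tune_and_evaluate(task, lr, batch_size, epochs):
    """Placeholder: fine-tune the model for `task` and return a validation score."""
    raise NotImplementedError  # assumption: supplied by the experiment code

search_space = {
    "lr": [2e-5, 3e-5, 5e-5],   # assumed candidate learning rates
    "batch_size": [16, 32],     # assumed candidate batch sizes
    "epochs": [2, 3, 4],        # assumed candidate epoch counts
}

def grid_search(task):
    """Try every configuration for one task and keep the best-scoring one."""
    best_score, best_config = float("-inf"), None
    for lr, bs, ep in product(*search_space.values()):
        score = fine_tune_and_evaluate(task, lr=lr, batch_size=bs, epochs=ep)
        if score > best_score:
            best_score = score
            best_config = {"lr": lr, "batch_size": bs, "epochs": ep}
    return best_config, best_score

# Separate sweeps mirror the regression vs. classification comparison:
# best_rating = grid_search("rating_regression")
# best_ranking = grid_search("ranking_classification")
```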
- Anthology ID: 2020.semeval-1.137
- Volume: Proceedings of the Fourteenth Workshop on Semantic Evaluation
- Month: December
- Year: 2020
- Address: Barcelona (online)
- Editors: Aurélie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
- Venue: SemEval
- SIG: SIGLEX
- Publisher: International Committee for Computational Linguistics
- Pages: 1049–1054
- URL: https://aclanthology.org/2020.semeval-1.137
- DOI: 10.18653/v1/2020.semeval-1.137
- Cite (ACL): J. A. Meaney, Steven Wilson, and Walid Magdy. 2020. Smash at SemEval-2020 Task 7: Optimizing the Hyperparameters of ERNIE 2.0 for Humor Ranking and Rating. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1049–1054, Barcelona (online). International Committee for Computational Linguistics.
- Cite (Informal): Smash at SemEval-2020 Task 7: Optimizing the Hyperparameters of ERNIE 2.0 for Humor Ranking and Rating (Meaney et al., SemEval 2020)
- PDF: https://preview.aclanthology.org/landing_page/2020.semeval-1.137.pdf