JUAGE at SemEval-2023 Task 10: Parameter Efficient Classification

Jeffrey Sorensen, Katerina Korre, John Pavlopoulos, Katrin Tomanek, Nithum Thain, Lucas Dixon, Léo Laugier


Abstract
Using pre-trained language models to build classifiers from small to modest amounts of training data is an area of active research. Parameter-efficient tuning extends the ability of large language models to generalize from few-shot examples and to produce strong classifiers. Using the Explainable Detection of Online Sexism (EDOS) training data and a small number of trainable weights that form a tuned prompt vector, we built a competitive model for this task that was top-ranked in Subtask B.
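The core idea described above is prompt tuning: the pre-trained model is kept frozen and only a small prompt-embedding matrix (the "tuned prompt vector") is learned. The sketch below shows one way to set this up with the Hugging Face peft library; the backbone model, the number of virtual tokens, the label count, and the use of peft itself are illustrative assumptions, not the authors' actual system.

# Minimal prompt-tuning sketch (illustrative; not the authors' exact setup).
# Only the soft-prompt embeddings (and the classifier head) are trained;
# the pre-trained backbone stays frozen.
from peft import PromptTuningConfig, TaskType, get_peft_model
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = "roberta-base"  # assumption: any encoder classifier backbone
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=4)

# A short run of "virtual tokens" is prepended to every input; only their
# embeddings (num_virtual_tokens x hidden_size parameters) are updated.
peft_config = PromptTuningConfig(
    task_type=TaskType.SEQ_CLS,
    num_virtual_tokens=20,  # assumption: prompt length is a tunable choice
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # a tiny fraction of the full model

# Training then proceeds as usual (e.g. with transformers.Trainer on the
# EDOS data); gradients flow only into the prompt and classifier head.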
Anthology ID:
2023.semeval-1.166
Volume:
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Atul Kr. Ojha, A. Seza Doğruöz, Giovanni Da San Martino, Harish Tayyar Madabushi, Ritesh Kumar, Elisa Sartori
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
1195–1203
URL:
https://aclanthology.org/2023.semeval-1.166
DOI:
10.18653/v1/2023.semeval-1.166
Cite (ACL):
Jeffrey Sorensen, Katerina Korre, John Pavlopoulos, Katrin Tomanek, Nithum Thain, Lucas Dixon, and Léo Laugier. 2023. JUAGE at SemEval-2023 Task 10: Parameter Efficient Classification. In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), pages 1195–1203, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
JUAGE at SemEval-2023 Task 10: Parameter Efficient Classification (Sorensen et al., SemEval 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-3/2023.semeval-1.166.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-3/2023.semeval-1.166.mp4