Team_Swift at SemEval-2020 Task 9: Tiny Data Specialists through Domain-Specific Pre-training on Code-Mixed Data

Aditya Malte, Pratik Bhavsar, Sushant Rathi


Abstract
Code-mixing is a phenomenon in which a speaker alternates between two or more languages within the same text. In this paper, we describe an unconventional approach to the SentiMix Hindi-English challenge (UID: aditya_malte). Instead of directly fine-tuning large contemporary Transformer models, we train our own domain-specific embeddings and use them for downstream tasks. We also discuss how this technique yields comparable performance while producing a much more lightweight and deployable model. Notably, we achieved the stated results without any ensembling techniques, in keeping with a paradigm of efficient, production-ready NLP. All relevant source code will be made publicly available to encourage reuse and reproduction of the results.
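To make the described pipeline concrete, here is a minimal sketch of the general approach the abstract outlines: pre-train domain-specific embeddings on code-mixed text, then feed them to a small downstream sentiment classifier. This is not the authors' code; the corpus, labels, choice of gensim Word2Vec, mean-pooled features, and logistic-regression head are all illustrative assumptions.

```python
# Sketch only: domain-specific embeddings + lightweight classifier.
# Corpus, labels, and hyperparameters are placeholders, not the SentiMix setup.
import numpy as np
from gensim.models import Word2Vec          # gensim >= 4.0
from sklearn.linear_model import LogisticRegression

# Toy code-mixed (Hindi-English) sentences; a real run would use SentiMix data.
corpus = [
    "yeh movie bahut achhi thi totally loved it".split(),
    "kitna boring match tha waste of time".split(),
    "food was amazing yaar zaroor try karna".split(),
    "worst service ever bilkul mat jao".split(),
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# Step 1: pre-train embeddings on the (unlabeled) code-mixed corpus itself.
emb = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, epochs=50)

# Step 2: represent each sentence as the mean of its word vectors.
def featurize(tokens):
    vecs = [emb.wv[t] for t in tokens if t in emb.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(emb.vector_size)

X = np.stack([featurize(s) for s in corpus])

# Step 3: a small, deployable classifier on top of the frozen embeddings.
clf = LogisticRegression().fit(X, labels)
print(clf.predict(featurize("bahut achhi movie loved it".split()).reshape(1, -1)))
```

Relative to fine-tuning a large Transformer, a frozen-embedding pipeline like this keeps the deployable artifact small (embedding table plus a linear head), which is the efficiency trade-off the paper emphasizes.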
Anthology ID:
2020.semeval-1.177
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Venues:
COLING | SemEval
SIGs:
SIGLEX | SIGSEM
Publisher:
International Committee for Computational Linguistics
Pages:
1310–1315
URL:
https://aclanthology.org/2020.semeval-1.177
DOI:
10.18653/v1/2020.semeval-1.177
Cite (ACL):
Aditya Malte, Pratik Bhavsar, and Sushant Rathi. 2020. Team_Swift at SemEval-2020 Task 9: Tiny Data Specialists through Domain-Specific Pre-training on Code-Mixed Data. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1310–1315, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
Team_Swift at SemEval-2020 Task 9: Tiny Data Specialists through Domain-Specific Pre-training on Code-Mixed Data (Malte et al., SemEval 2020)
PDF:
https://preview.aclanthology.org/update-css-js/2020.semeval-1.177.pdf
Data
SentiMix