Abstract
This paper reports our submission to Shared Task 2: Identification of Informative COVID-19 English Tweets at W-NUT 2020. We attempted several techniques, and we briefly describe here the two models that showed promising results on the tweet classification task: DistilBERT and FastText. DistilBERT achieves an F1 score of 0.7508 on the test set, the best of our submissions.
- Anthology ID: 2020.wnut-1.56
- Volume: Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)
- Month: November
- Year: 2020
- Address: Online
- Editors: Wei Xu, Alan Ritter, Tim Baldwin, Afshin Rahimi
- Venue: WNUT
- Publisher: Association for Computational Linguistics
- Pages: 399–403
- URL: https://aclanthology.org/2020.wnut-1.56
- DOI: 10.18653/v1/2020.wnut-1.56
- Cite (ACL): Supriya Chanda, Eshita Nandy, and Sukomal Pal. 2020. IRLab@IITBHU at WNUT-2020 Task 2: Identification of informative COVID-19 English Tweets using BERT. In Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020), pages 399–403, Online. Association for Computational Linguistics.
- Cite (Informal): IRLab@IITBHU at WNUT-2020 Task 2: Identification of informative COVID-19 English Tweets using BERT (Chanda et al., WNUT 2020)
- PDF: https://preview.aclanthology.org/ml4al-ingestion/2020.wnut-1.56.pdf
- Code: VinAIResearch/COVID19Tweet