@inproceedings{fayoumi-yeniterzi-2020-su,
    title = "{SU}-{NLP} at {WNUT}-2020 Task 2: The Ensemble Models",
    author = "Fayoumi, Kenan  and
      Yeniterzi, Reyyan",
    editor = "Xu, Wei  and
      Ritter, Alan  and
      Baldwin, Tim  and
      Rahimi, Afshin",
    booktitle = "Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2020.wnut-1.61/",
    doi = "10.18653/v1/2020.wnut-1.61",
    pages = "423--427",
    abstract = "In this paper, we address the problem of identifying informative tweets related to COVID-19 in the form of a binary classification task as part of our submission for W-NUT 2020 Task 2. Specifically, we focus on ensembling methods to boost the classification performance of classification models such as BERT and CNN. We show that ensembling can reduce the variance in performance, specifically for BERT base models."
}

Markdown (Informal)
[SU-NLP at WNUT-2020 Task 2: The Ensemble Models](https://preview.aclanthology.org/ingest-emnlp/2020.wnut-1.61/) (Fayoumi & Yeniterzi, WNUT 2020)

ACL
Kenan Fayoumi and Reyyan Yeniterzi. 2020. SU-NLP at WNUT-2020 Task 2: The Ensemble Models. In Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020), pages 423–427, Online. Association for Computational Linguistics.