Zhegu@SMM4H-2022: The Pre-training Tweet & Claim Matching Makes Your Prediction Better

Pan He, Chen YuZe, Yanru Zhang


Abstract
SMM4H-2022 (CITATION) Task 2 asks systems to detect whether tweets about COVID-19 on social media contain a premise, and to identify the users' stances towards given claims. In this paper, we propose Tweet Claim Matching (TCM), a new pre-training task that pairs tweets with claims in a manner similar to Next Sentence Prediction (NSP). We first continue pre-training standard pre-trained language models on the labelled dataset and then fine-tune them to obtain better performance. Compared with a solid baseline (CITATION), we achieve an absolute improvement of 7.9% in Task 2a and obtain state-of-the-art results.
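The sketch below illustrates the general idea of an NSP-style tweet/claim matching objective as described in the abstract; it is not the authors' released code. The model name, the negative-sampling scheme, and the toy tweets and claims are assumptions for illustration, using the Hugging Face Transformers sequence-pair classification API.

# Minimal sketch of a Tweet-Claim Matching (TCM) style pre-training step,
# assuming a BERT-style encoder; all names and data below are illustrative.
import random
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # assumption: any standard pre-trained LM

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Binary head: does this tweet discuss / match this claim?
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def build_tcm_pairs(tweets, claims):
    """Pair each tweet with its gold claim (label 1) and a random other claim
    (label 0), mirroring how NSP pairs consecutive vs. random sentences."""
    pairs = []
    for tweet, claim in zip(tweets, claims):
        pairs.append((tweet, claim, 1))  # matched pair
        negative = random.choice([c for c in claims if c != claim])
        pairs.append((tweet, negative, 0))  # mismatched pair
    return pairs

# Toy labelled data standing in for the SMM4H-2022 Task 2 tweets and claims.
tweets = ["Masks do nothing, stop forcing them.",
          "Staying home kept my family safe."]
claims = ["Wearing a face mask is effective against COVID-19.",
          "Stay-at-home orders reduce the spread of COVID-19."]

pairs = build_tcm_pairs(tweets, claims)
texts_a, texts_b, labels = zip(*pairs)
batch = tokenizer(list(texts_a), list(texts_b),
                  padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=torch.tensor(labels))
outputs.loss.backward()  # one TCM continued pre-training step; task fine-tuning follows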
Anthology ID:
2022.smm4h-1.11
Volume:
Proceedings of The Seventh Workshop on Social Media Mining for Health Applications, Workshop & Shared Task
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Venue:
SMM4H
Publisher:
Association for Computational Linguistics
Pages:
38–41
URL:
https://aclanthology.org/2022.smm4h-1.11
Cite (ACL):
Pan He, Chen YuZe, and Yanru Zhang. 2022. Zhegu@SMM4H-2022: The Pre-training Tweet & Claim Matching Makes Your Prediction Better. In Proceedings of The Seventh Workshop on Social Media Mining for Health Applications, Workshop & Shared Task, pages 38–41, Gyeongju, Republic of Korea. Association for Computational Linguistics.
Cite (Informal):
Zhegu@SMM4H-2022: The Pre-training Tweet & Claim Matching Makes Your Prediction Better (He et al., SMM4H 2022)
PDF:
https://preview.aclanthology.org/starsem-semeval-split/2022.smm4h-1.11.pdf