Fine-Tuned Neural Models for Propaganda Detection at the Sentence and Fragment levels

Tariq Alhindi, Jonas Pfeiffer, Smaranda Muresan
Abstract
This paper presents the CUNLP submission for the NLP4IF 2019 shared task on Fine-Grained Propaganda Detection. Based on scores on the blind test set, our system finished 5th out of 26 teams on the sentence-level classification task and 5th out of 11 teams on the fragment-level classification task. We present our models, a discussion of our ablation studies and experiments, and an analysis of our performance on all eighteen propaganda techniques present in the shared-task corpus.
Anthology ID:
D19-5013
Volume:
Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Anna Feldman, Giovanni Da San Martino, Alberto Barrón-Cedeño, Chris Brew, Chris Leberknight, Preslav Nakov
Venue:
NLP4IF
Publisher:
Association for Computational Linguistics
Pages:
98–102
URL:
https://aclanthology.org/D19-5013
DOI:
10.18653/v1/D19-5013
Cite (ACL):
Tariq Alhindi, Jonas Pfeiffer, and Smaranda Muresan. 2019. Fine-Tuned Neural Models for Propaganda Detection at the Sentence and Fragment levels. In Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda, pages 98–102, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Fine-Tuned Neural Models for Propaganda Detection at the Sentence and Fragment levels (Alhindi et al., NLP4IF 2019)
PDF:
https://aclanthology.org/D19-5013.pdf