@inproceedings{mapes-etal-2019-divisive,
    title = "Divisive Language and Propaganda Detection using Multi-head Attention Transformers with Deep Learning {BERT}-based Language Models for Binary Classification",
    author = "Mapes, Norman  and
      White, Anna  and
      Medury, Radhika  and
      Dua, Sumeet",
    editor = "Feldman, Anna  and
      Da San Martino, Giovanni  and
      Barr{\'o}n-Cede{\~n}o, Alberto  and
      Brew, Chris  and
      Leberknight, Chris  and
      Nakov, Preslav",
    booktitle = "Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D19-5014/",
    doi = "10.18653/v1/D19-5014",
    pages = "103--106",
    abstract = "On the NLP4IF 2019 sentence-level propaganda classification task, we used a BERT language model pre-trained on Wikipedia and BookCorpus, ranking {\#}1 of 26 as team ltuorp. The model applies deep learning in the form of a multi-head attention transformer. We replaced the final softmax layer of the network with a single linear real-valued output neuron, and backpropagation trained the entire network rather than just this last layer. Training took 3 epochs, which required approximately one day on our computational resources. The pre-trained model is uncased, with 12 layers, 768 hidden neurons, and 12 attention heads, for a total of 110 million parameters. The articles used in the training data promote divisive language similar to state-actor-funded influence operations on social media. Twitter shows state-sponsored examples designed to maximize division across political lines, ranging from ``Obama calls me a clinger, Hillary calls me deplorable, ... and Trump calls me an American'', oriented to the political right, to Russian propaganda featuring ``Black Lives Matter'' material suggesting institutional racism in US police forces, oriented to the political left. We hope that raising awareness through our work will reduce polarizing dialogue for the betterment of nations."
}
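Below is a minimal sketch of the setup the abstract describes: bert-base-uncased (12 layers, 768 hidden units, 12 heads, ~110M parameters) with its softmax output layer swapped for a single linear real-valued neuron, fine-tuned end to end for binary sentence classification. This is not the authors' code; the PyTorch/Hugging Face `transformers` stack, the learning rate, and the loss function are illustrative assumptions (the paper specifies only the architecture and the 3-epoch training).

```python
# Sketch (assumed PyTorch + Hugging Face transformers; not the authors' code):
# BERT with a single linear real-valued output neuron in place of a softmax
# layer, trained end to end for binary propaganda classification.
import torch
from torch import nn
from transformers import BertModel, BertTokenizer

class BertBinaryClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # bert-base-uncased: 12 layers, 768 hidden, 12 heads, ~110M parameters.
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # Single real-valued output neuron replacing the softmax layer.
        self.head = nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(out.pooler_output).squeeze(-1)  # raw logit per sentence

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertBinaryClassifier()
loss_fn = nn.BCEWithLogitsLoss()
# Backpropagation updates the entire network, not just the new head
# (lr 2e-5 is an assumed value typical for BERT fine-tuning).
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One illustrative training step; the paper trains for 3 epochs.
batch = tokenizer(["Example sentence."], return_tensors="pt",
                  padding=True, truncation=True)
logit = model(batch["input_ids"], batch["attention_mask"])
loss = loss_fn(logit, torch.tensor([1.0]))  # 1.0 = propaganda label
optimizer.zero_grad()
loss.backward()
optimizer.step()
```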