@inproceedings{lee-etal-2018-comparative,
    title = "Comparative Studies of Detecting Abusive Language on {T}witter",
    author = "Lee, Younghun  and
      Yoon, Seunghyun  and
      Jung, Kyomin",
    editor = "Fi{\v{s}}er, Darja  and
      Huang, Ruihong  and
      Prabhakaran, Vinodkumar  and
      Voigt, Rob  and
      Waseem, Zeerak  and
      Wernimont, Jacqueline",
    booktitle = "Proceedings of the 2nd Workshop on Abusive Language Online ({ALW}2)",
    month = oct,
    year = "2018",
    address = "Brussels, Belgium",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/W18-5113/",
    doi = "10.18653/v1/W18-5113",
    pages = "101--106",
    abstract = "The context-dependent nature of online aggression makes annotating large collections of data extremely difficult. Previously studied datasets in abusive language detection have been insufficient in size to efficiently train deep learning models. Recently, \textit{Hate and Abusive Speech on Twitter}, a dataset much greater in size and reliability, has been released. However, this dataset has not been comprehensively studied to its potential. In this paper, we conduct the first comparative study of various learning models on \textit{Hate and Abusive Speech on Twitter}, and discuss the possibility of using additional features and context data for improvements. Experimental results show that a bidirectional GRU network trained on word-level features, with Latent Topic Clustering modules, is the most accurate model, scoring 0.805 F1."
}