@inproceedings{madanagopal-caverlee-2023-reinforced,
    title = "Reinforced Sequence Training based Subjective Bias Correction",
    author = "Madanagopal, Karthic  and
      Caverlee, James",
    editor = "Vlachos, Andreas  and
      Augenstein, Isabelle",
    booktitle = "Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics",
    month = may,
    year = "2023",
    address = "Dubrovnik, Croatia",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2023.eacl-main.189/",
    doi = "10.18653/v1/2023.eacl-main.189",
    pages = "2585--2598",
    abstract = "Subjective bias is ubiquitous on news sites, social media, and knowledge resources like Wikipedia. Many existing methods for subjective bias correction have typically focused on making one-word edits and have been trained over a single (often, noisy) domain. In contrast, we propose a novel reinforced sequence training approach for robust subjective bias correction. Three of the unique characteristics of the approach are: (i) it balances bias neutralization with fluency and semantics preservation through reinforcement learning, broadening the scope to bias beyond a single word; (ii) it is cross-trained over multiple sources of bias to be more robust to new styles of biased writing that are not seen in the training data for a single domain; and (iii) it is used to fine-tune a large pre-trained transformer model to yield state-of-the-art performance on the bias text correction task. Extensive experiments show that the proposed approach results in significant improvements in subjective bias correction versus alternatives."
}