Abstract
Text moderation for user-generated content, which helps promote healthy interaction among users, has been widely studied, and many machine learning models have been proposed. In this work, we explore an alternative perspective by augmenting reactive review with proactive forecasting. Specifically, we propose a new concept, text toxicity propensity, to characterize the extent to which a text tends to attract toxic comments. Beta regression is then introduced for probabilistic modeling and is shown to work well in comprehensive experiments. We also propose an explanation method to communicate the model's decisions clearly. Both propensity scoring and interpretation benefit text moderation in a novel manner. Finally, the proposed scaling mechanism for the linear model offers useful insights beyond this work.
- Anthology ID:
- 2021.emnlp-main.682
- Volume:
- Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
- Month:
- November
- Year:
- 2021
- Address:
- Online and Punta Cana, Dominican Republic
- Venue:
- EMNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 8667–8675
- URL:
- https://aclanthology.org/2021.emnlp-main.682
- DOI:
- 10.18653/v1/2021.emnlp-main.682
- Cite (ACL):
- Fei Tan, Yifan Hu, Kevin Yen, and Changwei Hu. 2021. BERT-Beta: A Proactive Probabilistic Approach to Text Moderation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8667–8675, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
- Cite (Informal):
- BERT-Beta: A Proactive Probabilistic Approach to Text Moderation (Tan et al., EMNLP 2021)
- PDF:
- https://aclanthology.org/2021.emnlp-main.682.pdf
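
The Beta regression idea from the abstract can be sketched minimally as follows. This is not the authors' implementation: the paper couples Beta regression with a BERT encoder, whereas here a single synthetic scalar feature stands in for the text representation, and all variable names and the sigmoid-mean/precision parameterization are illustrative assumptions. The point is only that a target in (0, 1), such as a toxicity-propensity score, can be fit by maximizing a Beta likelihood whose mean depends on the input.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(0)

# Synthetic stand-in data: one scalar feature x per "text"; the true
# propensity mean rises with x (hypothetical, for illustration only).
n = 2000
x = rng.normal(size=n)
true_w, true_b, true_phi = 1.5, -0.5, 10.0
mu = 1.0 / (1.0 + np.exp(-(true_w * x + true_b)))
y = beta_dist.rvs(mu * true_phi, (1.0 - mu) * true_phi, random_state=rng)
y = np.clip(y, 1e-6, 1 - 1e-6)  # keep targets strictly inside (0, 1)

def neg_log_lik(params):
    """Negative Beta log-likelihood under a mean/precision parameterization:
    mean m = sigmoid(w*x + b), precision phi = exp(log_phi) > 0,
    so the Beta shape parameters are a = m*phi and b = (1-m)*phi."""
    w, b, log_phi = params
    z = np.clip(w * x + b, -30.0, 30.0)   # guard against exp overflow
    m = 1.0 / (1.0 + np.exp(-z))
    phi = np.exp(log_phi)
    return -beta_dist.logpdf(y, m * phi, (1.0 - m) * phi).sum()

res = minimize(neg_log_lik, x0=np.zeros(3), method="L-BFGS-B")
w_hat, b_hat, phi_hat = res.x[0], res.x[1], np.exp(res.x[2])
```

With enough data the maximum-likelihood fit recovers the generating parameters closely; at prediction time, the fitted sigmoid mean serves directly as the propensity score for a new input.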