Abstract
Large-scale, transformer-based language models such as GPT-2 are pretrained on diverse corpora scraped from the internet. Consequently, they are prone to generating non-normative text (i.e., text that violates social norms). We introduce a technique for fine-tuning GPT-2 using a policy-gradient reinforcement learning method with a normative text classifier that produces reward and punishment values. We evaluate our technique on five data sets using automated and human-participant experiments. The normative text classifier is 81-90% accurate when compared to gold-standard human judgements of normative and non-normative generated text. Our normative fine-tuning technique reduces non-normative text by 27-61%, depending on the data set.
- Anthology ID: 2020.inlg-1.43
- Volume: Proceedings of the 13th International Conference on Natural Language Generation
- Month: December
- Year: 2020
- Address: Dublin, Ireland
- Editors: Brian Davis, Yvette Graham, John Kelleher, Yaji Sripada
- Venue: INLG
- SIG: SIGGEN
- Publisher: Association for Computational Linguistics
- Pages: 374–383
- URL: https://aclanthology.org/2020.inlg-1.43
- DOI: 10.18653/v1/2020.inlg-1.43
- Cite (ACL): Xiangyu Peng, Siyan Li, Spencer Frazier, and Mark Riedl. 2020. Reducing Non-Normative Text Generation from Language Models. In Proceedings of the 13th International Conference on Natural Language Generation, pages 374–383, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal): Reducing Non-Normative Text Generation from Language Models (Peng et al., INLG 2020)
- PDF: https://preview.aclanthology.org/naacl-24-ws-corrections/2020.inlg-1.43.pdf
- Data: ROCStories
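The abstract describes the method only at a high level: GPT-2 is fine-tuned with a policy-gradient objective in which a normative-text classifier supplies the reward or punishment for sampled text. The sketch below illustrates one way such a REINFORCE-style update could look; the model name, the keyword-based stand-in for the classifier, the reward scale, and all hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of policy-gradient fine-tuning of GPT-2 with a
# classifier-based reward. Not the authors' code; names and settings
# are assumptions for illustration only.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)


def normative_reward(text: str) -> float:
    """Stand-in for a trained normative-text classifier: returns a score
    in [-1, 1], positive for normative text and negative otherwise.
    A real classifier would replace this keyword heuristic."""
    blocklist = {"kill", "steal", "hurt"}
    return -1.0 if any(word in text.lower() for word in blocklist) else 1.0


def reinforce_step(prompt: str, max_new_tokens: int = 40) -> float:
    """One policy-gradient update: sample a continuation from the current
    model, score it with the classifier, and scale its negative
    log-likelihood by the reward."""
    enc = tokenizer(prompt, return_tensors="pt").to(device)

    # Sample a continuation from the current policy (the language model).
    sample = model.generate(
        **enc,
        do_sample=True,
        top_p=0.9,
        max_new_tokens=max_new_tokens,
        pad_token_id=tokenizer.eos_token_id,
    )
    text = tokenizer.decode(sample[0], skip_special_tokens=True)
    reward = normative_reward(text)

    # Negative log-likelihood of the sampled continuation only
    # (prompt tokens are masked out with the ignore index -100).
    labels = sample.clone()
    labels[:, : enc["input_ids"].shape[1]] = -100
    out = model(sample, labels=labels)

    # REINFORCE-style loss: a positive reward lowers the NLL of the sample
    # (reinforcement); a negative reward raises it (punishment).
    loss = reward * out.loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```

As a usage sketch, calling `reinforce_step("My neighbor asked to borrow my car, so I")` repeatedly over prompts drawn from a story corpus such as ROCStories would nudge the model toward continuations the classifier scores as normative, which mirrors the reward-and-punishment fine-tuning loop the abstract outlines.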