@inproceedings{zhang-ren-2025-challenges,
title = "Challenges and Remedies of Domain-Specific Classifiers as {LLM} Guardrails: Self-Harm as a Case Study",
author = "Zhang, Bing and
Ren, Guang-Jie",
editor = "Chen, Weizhu and
Yang, Yi and
Kachuee, Mohammad and
Fu, Xue-Yong",
booktitle = "Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: Industry Track)",
month = apr,
year = "2025",
address = "Albuquerque, New Mexico",
publisher = "Association for Computational Linguistics",
url = "https://preview.aclanthology.org/moar-dois/2025.naacl-industry.15/",
doi = "10.18653/v1/2025.naacl-industry.15",
pages = "173--182",
ISBN = "979-8-89176-194-0",
abstract = "Context:Despite the impressive capabilities of Large Language Models (LLMs), they pose significant risks in many domains and therefore require guardrails throughout the lifecycle.Problem:Many such guardrails are trained as classifiers with domain-specific human text datasets obtained from sources such as social media and they achieve reasonable performance against closed-domain benchmarks. When deployed in the real world, however, the guardrails have to deal with machine text in an open domain, and their performance deteriorates drastically, rendering them almost unusable due to a high level of false refusal.Solution:In this paper, using a self-harm detector as an example, we demonstrate the specific challenges facing guardrail deployment due to the data drift between training and production environments. More specifically, we formed two hypotheses about the potential causes, i.e. closed vs. open domain, human vs. LLM-generated text, and conducted five experiments to explore various potential remedies, including their respective advantages and disadvantages.Evaluation:While focusing on one example, our experience and knowledge of LLM guardrails give us great confidence that our work contributes to a more thorough understanding of guardrail deployment and can be generalized as a methodology to build more robust domain-specific guardrails in real-world applications."
}