Challenges and Remedies of Domain-Specific Classifiers as LLM Guardrails: Self-Harm as a Case Study

Bing Zhang, Guang-Jie Ren


Abstract
Context: Despite the impressive capabilities of Large Language Models (LLMs), they pose significant risks in many domains and therefore require guardrails throughout their lifecycle.

Problem: Many such guardrails are trained as classifiers on domain-specific human-written text datasets obtained from sources such as social media, and they achieve reasonable performance against closed-domain benchmarks. When deployed in the real world, however, the guardrails must handle machine-generated text in an open domain, and their performance deteriorates drastically, rendering them almost unusable due to a high rate of false refusals.

Solution: In this paper, using a self-harm detector as an example, we demonstrate the specific challenges facing guardrail deployment due to the data drift between training and production environments. More specifically, we formed two hypotheses about the potential causes, i.e., closed vs. open domain and human vs. LLM-generated text, and conducted five experiments to explore various potential remedies, including their respective advantages and disadvantages.

Evaluation: While focusing on one example, our experience and knowledge of LLM guardrails give us great confidence that our work contributes to a more thorough understanding of guardrail deployment and can be generalized as a methodology to build more robust domain-specific guardrails in real-world applications.
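The guardrail pattern the abstract describes can be sketched minimally: a detector scores each LLM response, and responses above a threshold are refused. The following is an illustrative sketch only, not the authors' implementation; `self_harm_score` is a hypothetical keyword stub standing in for a classifier trained on closed-domain human text, and the final example shows how such a detector can falsely refuse benign open-domain text.

```python
def self_harm_score(text: str) -> float:
    """Hypothetical stand-in for a trained classifier's probability output.

    A real guardrail would call a model here; a keyword match is used only
    to keep the sketch self-contained.
    """
    triggers = ("hurt myself", "end my life", "self-harm")
    return 1.0 if any(t in text.lower() for t in triggers) else 0.1


def guardrail(response: str, threshold: float = 0.5) -> str:
    """Refuse the LLM response when the detector fires above the threshold."""
    if self_harm_score(response) > threshold:
        return "[refused by guardrail]"
    return response


# A benign sentence that happens to contain a trigger phrase illustrates the
# false-refusal problem the paper studies: the closed-domain detector fires
# even though the open-domain, machine-generated text is harmless.
print(guardrail("The narrator jokes that the plot twist will end my life of boredom."))
```

The threshold is the usual lever for trading false refusals against missed detections, but as the paper argues, no threshold fixes a detector whose training distribution (closed-domain human text) differs from the deployment distribution (open-domain LLM text).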
Anthology ID:
2025.naacl-industry.15
Volume:
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: Industry Track)
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Weizhu Chen, Yi Yang, Mohammad Kachuee, Xue-Yong Fu
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
173–182
URL:
https://preview.aclanthology.org/moar-dois/2025.naacl-industry.15/
DOI:
10.18653/v1/2025.naacl-industry.15
Cite (ACL):
Bing Zhang and Guang-Jie Ren. 2025. Challenges and Remedies of Domain-Specific Classifiers as LLM Guardrails: Self-Harm as a Case Study. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: Industry Track), pages 173–182, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Challenges and Remedies of Domain-Specific Classifiers as LLM Guardrails: Self-Harm as a Case Study (Zhang & Ren, NAACL 2025)
PDF:
https://preview.aclanthology.org/moar-dois/2025.naacl-industry.15.pdf