Chad DeLuca


2025

STAR: Self-Automated Back-Querying for Production Data Generation
Kellen Tan Cheng | Anna Lisa Gentile | Chad DeLuca | Guang-Jie Ren
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics

The pervasiveness of large language models (LLMs) in enterprise settings has also brought significant risks associated with their usage. Guardrail technologies aim to mitigate these risks by filtering LLMs’ input/output text through various detectors. However, developing and maintaining robust detectors poses many challenges, one of which is the difficulty of acquiring production-quality labeled data on real LLM outputs before deployment. In this work, we propose STAR, a simple yet intuitive solution for generating production-like labeled data for LLM guardrails development. STAR is based on two key ideas: (i) self-automated back-querying to synthetically generate data, paired with (ii) a sparse human-in-the-loop clustering technique to label the data. The aim of self-automated back-querying is to construct a parallel corpus that is roughly representative of the original dataset while resembling real LLM output. We then infuse existing datasets with our synthetically generated examples to produce robust training data for our detectors. We test our technique on one of the most difficult and nuanced detection tasks, the identification of health advice in LLM output, and demonstrate improvements over other solutions. Our detector outperforms GPT-4o by up to 3.48%, despite having 400x fewer parameters.
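The abstract leaves the mechanics of back-querying unspecified; the minimal Python sketch below shows one plausible reading, in which an LLM is asked to reconstruct a query that could have produced a seed passage, and that query is then fed back to the model so the resulting answer resembles real production output. The prompt wording, the text-in/text-out LLM interface, and all function names are hypothetical illustrations, not the authors' implementation.

from typing import Callable, List

LLM = Callable[[str], str]  # any text-in/text-out model call

BACK_QUERY_PROMPT = (
    "Write a question to which the following passage would be a "
    "natural answer.\n\nPassage:\n{passage}\n\nQuestion:"
)

def back_query(llm: LLM, passage: str) -> str:
    """Ask the model to invent a query that could have produced `passage`."""
    return llm(BACK_QUERY_PROMPT.format(passage=passage))

def generate_parallel_corpus(llm: LLM, seed_passages: List[str]) -> List[str]:
    """Map each seed passage to a back-query, then to a fresh LLM answer.
    The answers form the production-like parallel corpus."""
    corpus = []
    for passage in seed_passages:
        question = back_query(llm, passage)
        corpus.append(llm(question))  # answer in the target LLM's own style
    return corpus

if __name__ == "__main__":
    # Dummy stand-in model for demonstration; swap in a real client in practice.
    echo_llm: LLM = lambda prompt: f"[model response to: {prompt[:40]}...]"
    print(generate_parallel_corpus(echo_llm, ["Drink fluids and rest."]))

In practice the generated answers would then be labeled via the sparse human-in-the-loop clustering step and merged with the existing dataset, as the abstract describes.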

2024

Don’t be my Doctor! Recognizing Healthcare Advice in Large Language Models
Kellen Tan Cheng | Anna Lisa Gentile | Pengyuan Li | Chad DeLuca | Guang-Jie Ren
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track

Large language models (LLMs) have seen increasing popularity in daily use, with widespread adoption by many corporations as virtual assistants, chatbots, predictors, and more. Their growing influence raises the need for safeguards and guardrails to ensure that LLM outputs do not mislead or harm users. This is especially true in highly regulated domains such as healthcare, where misleading advice may lead users to unknowingly commit malpractice. Despite this vulnerability, the majority of guardrail benchmarking datasets do not focus specifically on medical advice. In this paper, we present the HeAL benchmark (HEalth Advice in LLMs), a health-advice benchmark dataset that has been manually curated and annotated to evaluate LLMs’ capability in recognizing health advice, which we use to safeguard LLMs deployed in industrial settings. We use HeAL to assess several models and report a detailed analysis of the findings.
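As a rough illustration of how a benchmark like HeAL might be consumed, the sketch below scores a binary health-advice detector with precision, recall, and F1. The JSONL layout and the field names ("text", "label") are assumptions made for illustration; the actual release format may differ.

import json
from typing import Callable, Iterable, Tuple

def load_benchmark(path: str) -> Iterable[Tuple[str, bool]]:
    """Yield (text, is_health_advice) pairs from a JSONL file."""
    with open(path) as f:
        for line in f:
            row = json.loads(line)
            yield row["text"], bool(row["label"])

def evaluate(detector: Callable[[str], bool],
             examples: Iterable[Tuple[str, bool]]) -> dict:
    """Compute precision/recall/F1 for a binary health-advice detector."""
    tp = fp = fn = 0
    for text, gold in examples:
        pred = detector(text)
        tp += pred and gold
        fp += pred and not gold
        fn += (not pred) and gold
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

Here, detector could wrap either a small fine-tuned classifier or a prompted LLM, as long as it returns a boolean per input text.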