Guang-Jie Ren


2025

Disambiguation in Conversational Question Answering in the Era of LLMs and Agents: A Survey
Mehrab Tanjim | Yeonjun In | Xiang Chen | Victor Bursztyn | Ryan A. Rossi | Sungchul Kim | Guang-Jie Ren | Vaishnavi Muppala | Shun Jiang | Yongsung Kim | Chanyoung Park
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Ambiguity remains a fundamental challenge in Natural Language Processing (NLP) due to the inherent complexity and flexibility of human language. With the advent of Large Language Models (LLMs), addressing ambiguity has become even more critical due to their expanded capabilities and applications. This paper explores the definition, forms, and implications of ambiguity for language-driven systems in Conversational Question Answering (CQA), particularly in the context of LLMs. We define key terms and concepts, categorize various disambiguation approaches enabled by LLMs, and provide a comparative analysis of their advantages and disadvantages. We also explore publicly available datasets for benchmarking ambiguity detection and resolution techniques and highlight their relevance for ongoing research. Finally, we identify open problems and future research directions, especially in agentic settings, proposing areas for further investigation. By offering a comprehensive review of current research on ambiguities and disambiguation with LLMs, we aim to contribute to the development of more robust and reliable LLM-based systems.

Challenges and Remedies of Domain-Specific Classifiers as LLM Guardrails: Self-Harm as a Case Study
Bing Zhang | Guang-Jie Ren
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: Industry Track)

Context: Despite the impressive capabilities of Large Language Models (LLMs), they pose significant risks in many domains and therefore require guardrails throughout the lifecycle. Problem: Many such guardrails are trained as classifiers on domain-specific human text datasets obtained from sources such as social media, and they achieve reasonable performance against closed-domain benchmarks. When deployed in the real world, however, the guardrails have to deal with machine text in an open domain, and their performance deteriorates drastically, rendering them almost unusable due to a high level of false refusal. Solution: In this paper, using a self-harm detector as an example, we demonstrate the specific challenges facing guardrail deployment due to the data drift between training and production environments. More specifically, we formed two hypotheses about the potential causes, i.e., closed vs. open domain and human vs. LLM-generated text, and conducted five experiments to explore various potential remedies, including their respective advantages and disadvantages. Evaluation: While focusing on one example, our experience and knowledge of LLM guardrails give us great confidence that our work contributes to a more thorough understanding of guardrail deployment and can be generalized as a methodology to build more robust domain-specific guardrails in real-world applications.

Evaluating Large Language Models with Enterprise Benchmarks
Bing Zhang | Mikio Takeuchi | Ryo Kawahara | Shubhi Asthana | Md. Maruf Hossain | Guang-Jie Ren | Kate Soule | Yifan Mai | Yada Zhu
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: Industry Track)

The advancement of large language models (LLMs) has made rigorous and systematic evaluation of the complex tasks they perform increasingly challenging, especially in enterprise applications. LLMs therefore need to be benchmarked with enterprise datasets across a variety of NLP tasks. This work explores benchmarking strategies focused on LLM evaluation, with a specific emphasis on both English and Japanese. The proposed evaluation framework encompasses 25 publicly available domain-specific English benchmarks from diverse enterprise domains such as financial services, legal, climate, and cyber security, as well as 2 public Japanese finance benchmarks. The diverse performance of 8 models across different enterprise tasks highlights the importance of selecting the right model based on the specific requirements of each task. Code and prompts are available on GitHub.

2024

Don’t be my Doctor! Recognizing Healthcare Advice in Large Language Models
Kellen Tan Cheng | Anna Lisa Gentile | Pengyuan Li | Chad DeLuca | Guang-Jie Ren
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track

Large language models (LLMs) have seen increasing popularity in daily use, with widespread adoption by many corporations as virtual assistants, chatbots, predictors, and more. Their growing influence raises the need for safeguards and guardrails to ensure that the outputs from LLMs do not mislead or harm users. This is especially true for highly regulated domains such as healthcare, where misleading advice may influence users to unknowingly commit malpractice. Despite this vulnerability, the majority of guardrail benchmarking datasets do not focus sufficiently on medical advice specifically. In this paper, we present the HeAL benchmark (HEalth Advice in LLMs), a health-advice benchmark dataset that has been manually curated and annotated to evaluate LLMs' capability in recognizing health advice, which we use to safeguard LLMs deployed in industrial settings. We use HeAL to assess several models and report a detailed analysis of the findings.