Yuan Ling


2025

A Calibrated Reflection Approach for Enhancing Confidence Estimation in LLMs
Umesh Bodhwani | Yuan Ling | Shujing Dong | Yarong Feng | Hongfei Li | Ayush Goyal
Proceedings of the 5th Workshop on Trustworthy NLP (TrustNLP 2025)

A critical challenge in deploying Large Language Models (LLMs) is developing reliable mechanisms to estimate their confidence, enabling systems to determine when to trust model outputs and when to seek human intervention. In this paper, we present a Calibrated Reflection Approach for Enhancing Confidence Estimation in LLMs, a framework that combines structured reasoning with distance-aware calibration techniques. Our approach introduces three key innovations: (1) a Maximum Confidence Selection (MCS) method that comprehensively evaluates confidence across all possible labels, (2) a reflection-based prompting mechanism that enhances reasoning reliability, and (3) a distance-aware calibration technique that accounts for ordinal relationships between labels. We evaluate our framework across diverse datasets, including HelpSteer2, Llama T-REx, and an internal conversational dataset, demonstrating its effectiveness across both conversational and fact-based classification tasks. This work contributes to the broader goal of developing reliable and well-calibrated confidence estimation methods for LLMs, enabling informed decisions about when to trust model outputs and when to defer to human judgement.
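
As an illustration of the idea, the following is a minimal Python sketch of how Maximum Confidence Selection and a distance-aware error term might be combined; the label set, the elicited confidence scores, and the `distance_fn` helper are illustrative assumptions, not details taken from the paper.

```python
def maximum_confidence_selection(label_confidences):
    """Pick the label whose elicited confidence is highest.

    `label_confidences` maps each candidate label to a confidence score
    elicited from the LLM (e.g., via reflection-style prompting); the
    exact prompting protocol from the paper is not reproduced here.
    """
    label = max(label_confidences, key=label_confidences.get)
    return label, label_confidences[label]


def distance_aware_expected_error(label_confidences, predicted, distance_fn):
    """Sketch of a distance-aware notion of risk: mass placed on labels
    far from the prediction counts more than mass on adjacent labels.
    `distance_fn` is an assumed helper, e.g., the absolute difference
    between ordinal label indices.
    """
    total = sum(label_confidences.values()) or 1.0
    return sum(
        (conf / total) * distance_fn(predicted, label)
        for label, conf in label_confidences.items()
    )


# Hypothetical usage with a 1-5 ordinal rating scale.
confs = {1: 0.05, 2: 0.10, 3: 0.55, 4: 0.25, 5: 0.05}
pred, conf = maximum_confidence_selection(confs)
risk = distance_aware_expected_error(confs, pred, lambda a, b: abs(a - b))
print(pred, conf, round(risk, 3))
```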

2018

DR-BiLSTM: Dependent Reading Bidirectional LSTM for Natural Language Inference
Reza Ghaeini | Sadid A. Hasan | Vivek Datla | Joey Liu | Kathy Lee | Ashequl Qadir | Yuan Ling | Aaditya Prakash | Xiaoli Fern | Oladimeji Farri
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

We present a novel deep learning architecture to address the natural language inference (NLI) task. Existing approaches mostly rely on simple reading mechanisms for independent encoding of the premise and hypothesis. Instead, we propose a novel dependent reading bidirectional LSTM network (DR-BiLSTM) to efficiently model the relationship between a premise and a hypothesis during encoding and inference. We also introduce a sophisticated ensemble strategy to combine our proposed models, which noticeably improves final predictions. Finally, we demonstrate how the results can be improved further with an additional preprocessing step. Our evaluation shows that DR-BiLSTM obtains the best single-model and ensemble results, achieving new state-of-the-art scores on the Stanford NLI dataset.
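
A minimal PyTorch sketch of the dependent-reading idea, assuming shared encoder weights: one sentence's final BiLSTM states initialize the encoder when reading the other sentence, instead of encoding the two sentences independently. The attention, inference, and ensemble stages of DR-BiLSTM are omitted, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class DependentReader(nn.Module):
    """Sketch: encode the hypothesis conditioned on the premise's final
    BiLSTM states (and symmetrically), rather than independently."""

    def __init__(self, embed_dim=300, hidden_dim=256):
        super().__init__()
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               bidirectional=True, batch_first=True)

    def forward(self, premise_emb, hypothesis_emb):
        # Independent first pass over the premise.
        _, premise_state = self.encoder(premise_emb)
        # Dependent pass: read the hypothesis starting from the premise's state.
        hyp_dep, _ = self.encoder(hypothesis_emb, premise_state)
        # Symmetric direction: read the premise from the hypothesis's state.
        _, hypothesis_state = self.encoder(hypothesis_emb)
        prem_dep, _ = self.encoder(premise_emb, hypothesis_state)
        return prem_dep, hyp_dep


# Hypothetical usage: batch of 2, sequence length 10, 300-d embeddings.
model = DependentReader()
p = torch.randn(2, 10, 300)
h = torch.randn(2, 10, 300)
prem_dep, hyp_dep = model(p, h)
```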

2017

Learning to Diagnose: Assimilating Clinical Narratives using Deep Reinforcement Learning
Yuan Ling | Sadid A. Hasan | Vivek Datla | Ashequl Qadir | Kathy Lee | Joey Liu | Oladimeji Farri
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Clinical diagnosis is a critical and non-trivial aspect of patient care which often requires significant medical research and investigation based on an underlying clinical scenario. This paper proposes a novel approach by formulating clinical diagnosis as a reinforcement learning problem. During training, the reinforcement learning agent mimics the clinician’s cognitive process and learns the optimal policy to obtain the most appropriate diagnoses for a clinical narrative. This is achieved through an iterative search for candidate diagnoses from external knowledge sources via a sentence-by-sentence analysis of the inherent clinical context. A deep Q-network architecture is trained to optimize a reward function that measures the accuracy of the candidate diagnoses. Experiments on the TREC CDS datasets demonstrate the effectiveness of our system over various non-reinforcement learning-based systems.
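
The following toy sketch shows the general shape of a Q-network that scores (state, candidate-diagnosis) pairs together with an accuracy-style reward; the state features, action encodings, and knowledge-source queries are hypothetical placeholders rather than the paper's actual design.

```python
import torch
import torch.nn as nn

class DiagnosisQNetwork(nn.Module):
    """Toy Q-network scoring a candidate diagnosis given the evidence
    read so far. State and action encodings are assumed to be fixed-size
    vectors; the paper's feature construction is not reproduced."""

    def __init__(self, state_dim=200, action_dim=200, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # Q-value for (state, candidate diagnosis)
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)


def reward(candidate_diagnoses, gold_diagnoses):
    """Hypothetical reward: fraction of proposed diagnoses matching the
    gold standard, standing in for the paper's accuracy-based reward."""
    if not candidate_diagnoses:
        return 0.0
    hits = sum(d in gold_diagnoses for d in candidate_diagnoses)
    return hits / len(candidate_diagnoses)
```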

Improving Clinical Diagnosis Inference through Integration of Structured and Unstructured Knowledge
Yuan Ling | Yuan An | Sadid Hasan
Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications

This paper presents a novel approach to the task of automatically inferring the most probable diagnosis from a given clinical narrative. Structured Knowledge Bases (KBs) can be useful for such complex tasks, but they are not sufficient on their own. Hence, we leverage a vast amount of unstructured free text and integrate it with structured KBs. The key innovative ideas include building a concept graph from both structured and unstructured knowledge sources and ranking the diagnosis concepts using enhanced word embedding vectors learned from the integrated sources. Experiments on the TREC CDS and HumanDx datasets showed that our methods improved the results of clinical diagnosis inference.
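
As a rough illustration, the sketch below ranks candidate diagnosis concepts by cosine similarity against a narrative representation; the concept vectors are assumed to already encode the integrated structured and unstructured knowledge, which is where the paper's actual contribution lies.

```python
import numpy as np

def rank_diagnoses(narrative_vec, concept_vecs):
    """Minimal sketch: rank candidate diagnosis concepts by cosine
    similarity between the narrative representation and each concept
    vector. In the paper these vectors are enhanced word embeddings
    learned from integrated KB and free-text sources; here they are
    simply assumed to be given."""
    def cosine(a, b):
        return float(np.dot(a, b) /
                     (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    scores = {c: cosine(narrative_vec, v) for c, v in concept_vecs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


# Hypothetical usage with random 100-dimensional vectors.
rng = np.random.default_rng(0)
narrative = rng.normal(size=100)
concepts = {
    "pneumonia": rng.normal(size=100),
    "pulmonary embolism": rng.normal(size=100),
}
print(rank_diagnoses(narrative, concepts))
```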