Desheng Zhang


2025

CoAlign: Uncertainty Calibration of LLM for Geospatial Repartition
Zejun Xie | Zhiqing Hong | Wenjun Lyu | Haotian Wang | Guang Wang | Desheng Zhang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track)

With the rapid expansion of e-commerce and continuous urban evolution, Geospatial Repartition, the division of geographical regions into delivery zones, is essential for optimizing various last-mile delivery objectives, e.g., the on-time delivery rate. Recently, large language models (LLMs) have offered promising capabilities for integrating diverse contextual information that is beneficial for geospatial repartition. However, given the inherent uncertainty in LLMs, adapting them to practical use in real-world repartition is nontrivial. Thus, we introduce CoAlign, a novel three-stage framework that calibrates LLM uncertainty to enable robust geospatial repartition by transforming the task into a ranking problem that integrates historical data with LLM-generated candidates. It first generates explainable candidate partitions with a multi-criteria strategy and then ranks these candidates relative to historical partitions using a novel conformal method with coverage guarantees. Finally, CoAlign delivers candidates through an interactive decision support system. Extensive evaluation with real-world data shows that CoAlign effectively calibrates LLM uncertainty and generates partitions that better align with human feedback. Moreover, we have deployed CoAlign in one of the world’s largest logistics companies, significantly enhancing their delivery operations by increasing candidate acceptance rates by 300% and improving on-time delivery rates by 3%. Our work provides a novel angle for addressing industrial geospatial decision-making tasks by calibrating LLM uncertainty.
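To make the ranking idea concrete, the following is a minimal, generic sketch of split conformal ranking, not CoAlign's actual method: the nonconformity scores, the acceptance level `alpha`, and the usage values are all assumptions introduced for illustration.

```python
# Generic split-conformal ranking sketch (illustrative only, NOT CoAlign's method).
import numpy as np

def conformal_rank(candidate_scores, calibration_scores, alpha=0.1):
    """Rank candidates by conformal p-value against historical (calibration) scores.

    candidate_scores: nonconformity scores of LLM-generated candidate partitions
                      (higher = less similar to historical partitions).
    calibration_scores: nonconformity scores of accepted historical partitions.
    Returns candidate indices sorted from most to least conforming, plus a
    boolean mask of candidates retained at the (1 - alpha) coverage level.
    """
    cal = np.asarray(calibration_scores)
    n = len(cal)
    # Standard split-conformal p-value for each candidate score.
    p_values = np.array([(np.sum(cal >= s) + 1) / (n + 1) for s in candidate_scores])
    order = np.argsort(-p_values)   # higher p-value = more conforming
    accepted = p_values >= alpha    # coverage-style acceptance threshold
    return order, accepted

# Hypothetical usage with made-up scores: lower score = closer to history.
order, accepted = conformal_rank([0.30, 1.20, 0.05], [0.20, 0.40, 0.10, 0.35, 0.25])
```

Under exchangeability of the calibration and candidate scores, this style of p-value thresholding is what gives the coverage guarantee the abstract refers to.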

2024

Variational Language Concepts for Interpreting Foundation Language Models
Hengyi Wang | Shiwei Tan | Zhiqing Hong | Desheng Zhang | Hao Wang
Findings of the Association for Computational Linguistics: EMNLP 2024

Foundation Language Models (FLMs) such as BERT and its variants have achieved remarkable success in natural language processing. To date, the interpretability of FLMs has primarily relied on the attention weights in their self-attention layers. However, these attention weights only provide word-level interpretations, failing to capture higher-level structures, and are therefore lacking in readability and intuitiveness. To address this challenge, we first provide a formal definition of *conceptual interpretation* and then propose a variational Bayesian framework, dubbed VAriational Language Concept (VALC), to go beyond word-level interpretations and provide concept-level interpretations. Our theoretical analysis shows that VALC finds the optimal language concepts to interpret FLM predictions. Empirical results on several real-world datasets show that our method can successfully provide conceptual interpretation for FLMs.
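For context, the following is a minimal sketch of the word-level attention interpretation that the paper argues is insufficient; it is not VALC itself, and the model checkpoint, pooling choice, and example sentence are assumptions chosen for illustration.

```python
# Word-level attention "interpretation" baseline (illustrative only, NOT VALC).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

text = "The movie was surprisingly good."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Average the last layer's attention from [CLS] to every token across heads:
# this yields one importance score per token, i.e., word-level interpretation
# with no notion of higher-level concepts.
last_layer_attn = outputs.attentions[-1][0]        # (heads, seq_len, seq_len)
cls_to_tokens = last_layer_attn[:, 0, :].mean(0)   # head-averaged attention from [CLS]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, score in zip(tokens, cls_to_tokens.tolist()):
    print(f"{tok:>12s}  {score:.3f}")
```

Such per-token scores are exactly the word-level view the abstract contrasts with concept-level interpretation.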