Le Xu


2025

StitchLLM: Serving LLMs, One Block at a Time
Bodun Hu | Shuozhe Li | Saurabh Agarwal | Myungjin Lee | Akshay Jajoo | Jiamin Li | Le Xu | Geon-Woo Kim | Donghyun Kim | Hong Xu | Amy Zhang | Aditya Akella
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

The rapid evolution of large language models (LLMs) has revolutionized natural language processing (NLP) tasks such as text generation, translation, and comprehension. However, the increasing computational demands and inference costs of these models present significant challenges. This study investigates the dynamic and efficient utilization of pre-trained weights from open-source LLMs of varying parameter sizes to achieve an optimal balance between computational efficiency and task performance. Drawing inspiration from the dual-process theory of human cognition, we introduce StitchLLM: a dynamic model routing framework that employs a powerful bottom model to process all queries and a lightweight routing mechanism to allocate computational resources appropriately. Our framework optimizes efficiency while maintaining performance, leveraging a trainable stitching layer for seamless integration of decoder layers across different LLMs. Experimental results demonstrate that StitchLLM improves system throughput while minimizing performance degradation, offering a flexible solution for deploying LLMs in resource-constrained settings.
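To make the stitching-and-routing idea concrete, here is a minimal PyTorch sketch, not StitchLLM's actual architecture: a trainable stitching layer bridges decoder blocks of two different hidden widths, and a lightweight router decides whether a query can stop after the bottom blocks. The hidden sizes, block counts, the TransformerEncoderLayer stand-ins, the escalation direction (narrow bottom, wide top), and the 0.5 threshold are all illustrative assumptions.

```python
import torch
import torch.nn as nn

BOTTOM_DIM, TOP_DIM = 256, 512  # hypothetical hidden sizes of two LLMs

class StitchingLayer(nn.Module):
    """Trainable adapter that projects hidden states from one model's
    width into another's, so their decoder blocks can be composed."""
    def __init__(self, src_dim, dst_dim):
        super().__init__()
        self.proj = nn.Linear(src_dim, dst_dim)
        self.norm = nn.LayerNorm(dst_dim)

    def forward(self, h):
        return self.norm(self.proj(h))

class Router(nn.Module):
    """Lightweight scorer deciding whether a query can stop after the
    bottom blocks or should continue through the stitched top blocks."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h):
        # Mean-pool over the sequence, then emit a scalar score in (0, 1).
        return torch.sigmoid(self.score(h.mean(dim=1)))

# Stand-ins for decoder blocks taken from two different pre-trained LLMs.
bottom = nn.ModuleList(
    [nn.TransformerEncoderLayer(BOTTOM_DIM, nhead=4, batch_first=True)
     for _ in range(4)])
top = nn.ModuleList(
    [nn.TransformerEncoderLayer(TOP_DIM, nhead=8, batch_first=True)
     for _ in range(4)])
stitch = StitchingLayer(BOTTOM_DIM, TOP_DIM)
router = Router(BOTTOM_DIM)

def stitched_forward(h, threshold=0.5):
    for blk in bottom:                # every query runs the bottom blocks
        h = blk(h)
    if router(h).mean() < threshold:  # easy batch: answer cheaply
        return h                      # (per-example routing in practice)
    h = stitch(h)                     # bridge the width mismatch
    for blk in top:                   # hard queries get extra capacity
        h = blk(h)
    return h

print(stitched_forward(torch.randn(2, 16, BOTTOM_DIM)).shape)
```

In this sketch the stitching layer and router would be the only newly trained parameters; both borrowed decoder stacks stay frozen, which is what makes composing off-the-shelf models cheap.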

2024

MOSEL: Inference Serving Using Dynamic Modality Selection
Bodun Hu | Le Xu | Jeongyoon Moon | Neeraja J Yadwadkar | Aditya Akella
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Rapid advancements over the years have helped machine learning models reach previously hard-to-achieve goals, sometimes even exceeding human capabilities. However, achieving the desired accuracy comes at the cost of larger model sizes and increased computational demands. Thus, serving predictions from these models while meeting the latency and cost requirements of applications remains a key challenge, despite recent work on inference serving systems and algorithmic approaches that dynamically adapt models based on inputs. Our paper introduces a new form of dynamism, modality selection, where we adaptively choose modalities from inference inputs while maintaining model quality. We introduce MOSEL, an automated inference serving system for multi-modal ML models that carefully picks input modalities per request based on user-defined performance and accuracy requirements. MOSEL exploits modality configurations extensively, improving system throughput by 3.6× with an accuracy guarantee. It also reduces job completion times by 11× compared to modality-agnostic approaches.
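As a rough illustration of per-request modality selection, not MOSEL's actual policy, the sketch below picks the cheapest modality subset from an offline-profiled table that satisfies a caller's accuracy floor and latency budget. The modality names, profiled numbers, and selection rule are all invented for illustration.

```python
# Offline profile: accuracy and per-request latency (ms) for each
# modality subset a hypothetical audio-visual model accepts.
# All values below are made up for illustration.
PROFILE = {
    ("audio", "video"): {"accuracy": 0.92, "latency_ms": 48.0},
    ("video",):         {"accuracy": 0.89, "latency_ms": 30.0},
    ("audio",):         {"accuracy": 0.84, "latency_ms": 12.0},
}

def select_modalities(latency_budget_ms, min_accuracy):
    """Pick the lowest-latency modality subset meeting the caller's
    accuracy floor and latency budget; return None if infeasible."""
    feasible = [
        (cfg, prof) for cfg, prof in PROFILE.items()
        if prof["accuracy"] >= min_accuracy
        and prof["latency_ms"] <= latency_budget_ms
    ]
    if not feasible:
        return None
    return min(feasible, key=lambda item: item[1]["latency_ms"])[0]

print(select_modalities(latency_budget_ms=35.0, min_accuracy=0.85))
# -> ('video',): drops audio to save latency while staying above 0.85
print(select_modalities(latency_budget_ms=50.0, min_accuracy=0.91))
# -> ('audio', 'video'): only the full configuration meets 0.91
```

The throughput gains reported in the abstract come from this kind of trade: requests with loose accuracy requirements skip expensive modalities entirely, freeing capacity for those that need the full input.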