DiSCo: Device-Server Collaborative LLM-based Text Streaming Services

Ting Sun, Penghan Wang, Fan Lai


Abstract
The rapid rise of large language models (LLMs) in text streaming services has introduced significant cost and Quality of Experience (QoE) challenges in serving millions of daily requests, especially in meeting Time-To-First-Token (TTFT) and Time-Between-Token (TBT) requirements for real-time interactions. Our real-world measurements show that both server-based and on-device deployments struggle to meet diverse QoE demands: server deployments face high costs and last-hop issues (e.g., Internet latency and dynamics), while on-device LLM inference is constrained by resources. We introduce DiSCo, a device-server cooperative scheduler designed to optimize users’ QoE by adaptively routing requests and migrating response generation between endpoints while maintaining cost constraints. DiSCo employs cost-aware scheduling, combining the predictable speed of on-device LLM inference with the flexible capacity of server-based inference to dispatch requests on the fly, and introduces a token-level migration mechanism to ensure consistent token delivery during migration. Evaluations on real-world workloads, including commercial services like OpenAI GPT and DeepSeek as well as open-source deployments such as LLaMA3, show that DiSCo can improve users’ QoE by reducing tail TTFT by 11-52% and mean TTFT by 6-78% across different model-device configurations, while its migration mechanism reduces serving costs by up to 84% at comparable QoE levels.
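The dispatch decision described in the abstract can be illustrated with a minimal sketch: estimate TTFT on the device (predictable from prompt length and local prefill speed) and on the server (network round trip plus queueing), then route to whichever endpoint meets the TTFT target within the cost budget. All names, classes, and the simple latency/cost models below are illustrative assumptions, not the paper’s actual implementation or API.

```python
# Hypothetical sketch of a cost-aware device/server dispatch decision.
from dataclasses import dataclass


@dataclass
class Request:
    prompt_tokens: int
    ttft_slo_ms: float        # Time-To-First-Token target


@dataclass
class DeviceProfile:
    prefill_ms_per_token: float   # on-device prefill speed (fairly predictable)


@dataclass
class ServerEstimate:
    network_rtt_ms: float         # last-hop Internet latency
    queueing_ms: float            # current server-side queueing delay
    cost_per_1k_tokens: float     # serving cost if routed to the server


def estimate_device_ttft(req: Request, dev: DeviceProfile) -> float:
    # On-device TTFT is dominated by local prefill over the prompt.
    return req.prompt_tokens * dev.prefill_ms_per_token


def estimate_server_ttft(srv: ServerEstimate) -> float:
    # Server TTFT additionally pays the network round trip and queueing delay.
    return srv.network_rtt_ms + srv.queueing_ms


def dispatch(req: Request, dev: DeviceProfile, srv: ServerEstimate,
             budget_per_1k_tokens: float) -> str:
    """Route to whichever endpoint is expected to meet the TTFT target,
    preferring the device when both qualify to save serving cost."""
    device_ttft = estimate_device_ttft(req, dev)
    server_ttft = estimate_server_ttft(srv)

    device_ok = device_ttft <= req.ttft_slo_ms
    server_ok = (server_ttft <= req.ttft_slo_ms
                 and srv.cost_per_1k_tokens <= budget_per_1k_tokens)

    if device_ok and not server_ok:
        return "device"
    if server_ok and not device_ok:
        return "server"
    if device_ok and server_ok:
        return "device"   # both meet the SLO: keep traffic local, spend nothing
    # Neither meets the SLO: fall back to the faster option.
    return "device" if device_ttft <= server_ttft else "server"


if __name__ == "__main__":
    req = Request(prompt_tokens=512, ttft_slo_ms=400.0)
    dev = DeviceProfile(prefill_ms_per_token=0.6)
    srv = ServerEstimate(network_rtt_ms=120.0, queueing_ms=200.0,
                         cost_per_1k_tokens=0.5)
    print(dispatch(req, dev, srv, budget_per_1k_tokens=1.0))  # -> "device"
```

In this toy model, the device wins whenever it can prefill the prompt before the TTFT deadline, which mirrors the abstract’s observation that on-device inference is predictable while server latency fluctuates with the network and queueing; the token-level migration mechanism (not sketched here) would then hand generation between endpoints mid-response when either side falls behind.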
Anthology ID:
2025.findings-acl.734
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
14259–14277
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.734/
Cite (ACL):
Ting Sun, Penghan Wang, and Fan Lai. 2025. DiSCo: Device-Server Collaborative LLM-based Text Streaming Services. In Findings of the Association for Computational Linguistics: ACL 2025, pages 14259–14277, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
DiSCo: Device-Server Collaborative LLM-based Text Streaming Services (Sun et al., Findings 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.734.pdf