Demystifying Small Language Models for Edge Deployment
Zhenyan Lu | Xiang Li | Dongqi Cai | Rongjie Yi | Fangming Liu | Wei Liu | Jian Luan | Xiwen Zhang | Nicholas D. Lane | Mengwei Xu
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025
Small language models (SLMs) have emerged as a promising solution for deployment on resource-constrained devices, such as smartphones and Web of Things devices. This work presents the first comprehensive study of over 60 publicly accessible SLMs, such as Microsoft Phi and Google Gemma. Our findings show that state-of-the-art SLMs outperform 7B models on general tasks, proving their practical viability. However, SLMs’ in-context learning capabilities remain limited, and their efficiency leaves significant room for optimization. We identify key SLM optimization opportunities, including dynamic task-specific routing, model-hardware co-design, and vocabulary/KV cache compression. Overall, we expect this work to reveal a comprehensive landscape of SLMs, benefiting the research community at the algorithm, model, system, and hardware levels.
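To illustrate one of the optimization opportunities named in the abstract, the sketch below shows the basic idea behind KV cache compression via int8 quantization: cached key/value tensors are stored at reduced precision and dequantized on read, trading a small accuracy loss for roughly 4x less cache memory. This is a minimal, hypothetical sketch for intuition only, not the paper's method; the tensor shapes and function names are assumptions.

```python
# Illustrative sketch only -- NOT the paper's method. Demonstrates the idea
# behind KV cache compression: store cached key/value tensors as int8 and
# dequantize when they are read back. Shapes and names are hypothetical.
import numpy as np

def quantize_kv(kv: np.ndarray):
    """Symmetric per-tensor int8 quantization of a float32 KV tensor."""
    scale = float(np.abs(kv).max()) / 127.0
    scale = scale if scale > 0 else 1.0  # guard against an all-zero cache
    q = np.clip(np.round(kv / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_kv(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 cache."""
    return q.astype(np.float32) * scale

# Hypothetical cache of shape (layers, heads, seq_len, head_dim).
kv = np.random.randn(2, 4, 128, 64).astype(np.float32)
q, scale = quantize_kv(kv)
approx = dequantize_kv(q, scale)
print(f"memory: {kv.nbytes} -> {q.nbytes} bytes, "
      f"max abs error: {np.abs(kv - approx).max():.4f}")
```

Real systems typically quantize per head or per channel rather than per tensor, which preserves accuracy better at the same memory budget.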