SensorLLM: Aligning Large Language Models with Motion Sensors for Human Activity Recognition

Zechen Li, Shohreh Deldari, Linyao Chen, Hao Xue, Flora D. Salim

Abstract
We introduce SensorLLM, a two-stage framework that enables Large Language Models (LLMs) to perform human activity recognition (HAR) from sensor time-series data. Despite their strong reasoning and generalization capabilities, LLMs remain underutilized for motion sensor data due to the lack of semantic context in time-series data, computational constraints, and challenges in processing numerical inputs. SensorLLM addresses these limitations through a Sensor-Language Alignment stage, where the model aligns sensor inputs with trend descriptions. Special tokens are introduced to mark channel boundaries. This alignment enables LLMs to capture numerical variations, channel-specific features, and data of varying durations, without requiring human annotations. In the subsequent Task-Aware Tuning stage, we refine the model for HAR classification, achieving performance that matches or surpasses state-of-the-art methods. Our results demonstrate that SensorLLM evolves into an effective sensor learner, reasoner, and classifier through human-intuitive Sensor-Language Alignment, generalizing across diverse HAR datasets. We believe this work establishes a foundation for future research on time-series and text alignment, paving the way for foundation models in sensor data analysis. Our code is available at https://github.com/zechenli03/SensorLLM.
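To make the channel-boundary idea concrete, the sketch below shows one plausible way to serialize a multi-channel motion-sensor window into an LLM input, wrapping each channel's readings in special boundary tokens. The token names (`<acc_x_start>`, etc.) and formatting scheme are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of channel-boundary serialization for sensor input.
# Token names and number formatting are assumptions for illustration only.

def serialize_window(window: dict) -> str:
    """Wrap each sensor channel's readings in special boundary tokens,
    so a language model can distinguish where one channel ends and the
    next begins within a single flattened sequence."""
    parts = []
    for name, values in window.items():
        body = " ".join(f"{v:.2f}" for v in values)
        parts.append(f"<{name}_start> {body} <{name}_end>")
    return " ".join(parts)

# Example: a short window from an accelerometer x-axis and gyroscope z-axis.
window = {
    "acc_x": [0.01, 0.12, -0.05],
    "gyro_z": [1.30, 1.28, 1.31],
}
prompt = serialize_window(window)
```

In practice such boundary tokens would be registered as special tokens in the LLM's tokenizer so they are never split into sub-words; the string form here only conveys the structure.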
Anthology ID:
2025.emnlp-main.19
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
354–379
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.19/
Cite (ACL):
Zechen Li, Shohreh Deldari, Linyao Chen, Hao Xue, and Flora D. Salim. 2025. SensorLLM: Aligning Large Language Models with Motion Sensors for Human Activity Recognition. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 354–379, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
SensorLLM: Aligning Large Language Models with Motion Sensors for Human Activity Recognition (Li et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.19.pdf
Checklist:
 2025.emnlp-main.19.checklist.pdf