HapticLLaMA: A Multimodal Sensory Language Model for Haptic Captioning

Guimin Hu, Daniel Hershcovich, Hasti Seifi


Abstract
Haptic captioning is the task of generating natural language descriptions from haptic signals, such as vibrations, for use in virtual reality and rehabilitation applications. While previous multimodal research has focused primarily on vision and audio, haptic feedback for the sense of touch remains underexplored. To address this gap, we formalize the haptic captioning task and propose HapticLLaMA, a multimodal sensory language model that interprets vibration signals into descriptions in a given sensory, emotional, or associative category. We investigate two types of haptic tokenizers, a frequency-based tokenizer and an EnCodec-based tokenizer, that convert haptic signals into sequences of discrete units, enabling their integration with the LLaMA model. HapticLLaMA is trained in two stages: (1) supervised fine-tuning using the LLaMA architecture with LoRA-based adaptation, and (2) fine-tuning via reinforcement learning from human feedback (RLHF). We assess HapticLLaMA’s captioning performance using both automated n-gram metrics and human evaluation. HapticLLaMA demonstrates strong capability in interpreting haptic vibration signals, achieving a METEOR score of 59.98 and a BLEU-4 score of 32.06. Furthermore, over 64% of the generated captions received human ratings above 3.5 on a 7-point scale, with RLHF yielding a 13% improvement in the overall rating distribution, indicating stronger alignment with human haptic perception. These findings highlight the potential of large language models to process and adapt to sensory data.
Anthology ID:
2026.findings-eacl.166
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Marquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3180–3192
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.166/
Cite (ACL):
Guimin Hu, Daniel Hershcovich, and Hasti Seifi. 2026. HapticLLaMA: A Multimodal Sensory Language Model for Haptic Captioning. In Findings of the Association for Computational Linguistics: EACL 2026, pages 3180–3192, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
HapticLLaMA: A Multimodal Sensory Language Model for Haptic Captioning (Hu et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.166.pdf
Checklist:
 2026.findings-eacl.166.checklist.pdf