RAV: Retrieval-Augmented Voting for Tactile Descriptions Without Training

Jinlin Wang, Yulong Ji, Hongyu Yang


Abstract
Tactile perception is essential for human-environment interaction, and deriving tactile descriptions from multimodal data is a key challenge for embodied intelligence to understand human perception. Conventional approaches that rely on extensive parameter learning for multimodal perception are rigid and computationally inefficient. To address this, we introduce Retrieval-Augmented Voting (RAV), a parameter-free method that constructs visual-tactile cross-modal knowledge directly. RAV retrieves similar visual-tactile data for given visual and tactile inputs and generates tactile descriptions through a voting mechanism. In experiments, we applied three voting strategies (SyncVote, DualVote, and WeightVote), achieving performance comparable to large-scale cross-modal models without training. Comparative experiments across datasets of varying quality, defined by annotation accuracy and data diversity, demonstrate that RAV's performance improves with higher-quality data at no additional computational cost. Code and model checkpoints are open-sourced at https://github.com/PluteW/RAV.
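To make the retrieve-then-vote idea concrete, the following is a minimal Python sketch, not the authors' released code. It assumes pre-computed visual and tactile embeddings with associated tactile-description labels for a reference set, and uses a simple similarity-weighted vote over neighbors pooled from both modalities as a stand-in for the paper's SyncVote/DualVote/WeightVote strategies, whose exact definitions are not given in the abstract.

# Hypothetical sketch of retrieval-augmented voting; function and variable
# names are illustrative, not from the paper.
import numpy as np
from collections import Counter

def retrieve_top_k(query, bank, k=5):
    """Return indices and cosine similarities of the k nearest reference items."""
    q = query / np.linalg.norm(query)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    sims = b @ q
    idx = np.argsort(-sims)[:k]
    return idx, sims[idx]

def weighted_vote(labels, weights):
    """Pick the description whose retrieved neighbors carry the most similarity mass."""
    scores = Counter()
    for label, w in zip(labels, weights):
        scores[label] += w
    return scores.most_common(1)[0][0]

def rav_describe(vis_query, tac_query, vis_bank, tac_bank, descriptions, k=5):
    # Retrieve neighbors independently in the visual and tactile embedding spaces,
    # then vote over the pooled candidates (a rough analogue of dual-modality voting).
    vi, vs = retrieve_top_k(vis_query, vis_bank, k)
    ti, ts = retrieve_top_k(tac_query, tac_bank, k)
    labels = [descriptions[i] for i in np.concatenate([vi, ti])]
    weights = np.concatenate([vs, ts])
    return weighted_vote(labels, weights)

Because the method only indexes and votes over existing annotated pairs, improving the reference set's annotation accuracy or diversity improves outputs without any retraining, which is the data-quality effect the abstract reports.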
Anthology ID:
2025.emnlp-main.315
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6198–6205
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.315/
Cite (ACL):
Jinlin Wang, Yulong Ji, and Hongyu Yang. 2025. RAV: Retrieval-Augmented Voting for Tactile Descriptions Without Training. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 6198–6205, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
RAV: Retrieval-Augmented Voting for Tactile Descriptions Without Training (Wang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.315.pdf
Checklist:
2025.emnlp-main.315.checklist.pdf