Karim Galliamov


2025

Enhancing RLHF with Human Gaze Modeling
Karim Galliamov | Ivan Titov | Ilya Pershin
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Reinforcement Learning from Human Feedback (RLHF) aligns language models with human preferences but faces efficiency challenges. We explore two approaches that leverage human gaze prediction to enhance RLHF: (1) gaze-aware reward models and (2) gaze-based distribution of sparse rewards at the token level. Our experiments show that gaze-informed RLHF converges faster while maintaining or slightly improving performance, reducing computational requirements during policy optimization. Human visual attention patterns provide valuable signals for policy training, suggesting a promising direction for improving RLHF efficiency through human-like attention mechanisms.
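The abstract's second approach (gaze-based distribution of sparse rewards at the token level) can be illustrated with a minimal sketch. The paper's exact formulation is not given here, so this assumes one plausible scheme: the single sequence-level reward is spread across tokens in proportion to predicted gaze weights. The function name `distribute_reward` and the example weights are hypothetical.

```python
import numpy as np

def distribute_reward(sequence_reward, gaze_weights):
    """Spread one sequence-level reward over tokens in proportion to
    predicted human gaze weights (hypothetical scheme, not the paper's
    exact formulation)."""
    w = np.asarray(gaze_weights, dtype=float)
    w = w / w.sum()  # normalize gaze weights into a distribution over tokens
    return sequence_reward * w  # per-token reward vector

# Example: a terminal reward of 1.0 spread over four tokens,
# with the second token attracting the most predicted gaze.
per_token = distribute_reward(1.0, [0.1, 0.4, 0.3, 0.2])
```

Because the weights are normalized, the per-token rewards sum back to the original sequence-level reward, so the total return seen by the policy is unchanged; only its placement along the sequence differs.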