@inproceedings{li-etal-2025-reward,
    title = "Reward-Shifted Speculative Sampling Is An Efficient Test-Time Weak-to-Strong Aligner",
    author = "Li, Bolian  and
      Wu, Yanran  and
      Luo, Xinyu  and
      Zhang, Ruqi",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.578/",
    pages = "11479--11489",
    ISBN = "979-8-89176-332-6",
    abstract = "Aligning large language models (LLMs) with human preferences has become a critical step in their development. Recent research has increasingly focused on test-time alignment, where additional compute is allocated during inference to enhance LLM safety and reasoning capabilities. However, these test-time alignment techniques often incur substantial inference costs, limiting their practical application. Inspired by speculative sampling, which leverages a small draft model to efficiently predict future tokens, we address the efficiency bottleneck of test-time alignment. We introduce the reward-shifted speculative sampling (SSS) algorithm, in which the draft model is aligned with human preferences while the target model remains unchanged. We theoretically demonstrate that the distributional shift between the aligned draft model and the unaligned target model can be exploited to recover the RLHF optimal solution without actually obtaining it, by modifying the acceptance criterion and bonus token distribution. Our algorithm achieves superior gold reward scores at a significantly reduced inference cost in test-time weak-to-strong alignment experiments, thereby validating both its effectiveness and efficiency."
}