@inproceedings{zhang-etal-2025-beyond-online,
    title = "Beyond Online Sampling: Bridging Offline-to-Online Alignment via Dynamic Data Transformation for {LLM}s",
    author = "Zhang, Zhang  and
      Feng, Guhao  and
      Guan, Jian  and
      He, Di  and
      Wu, Wei",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1378/",
    pages = "27085--27097",
    ISBN = "979-8-89176-332-6",
    abstract = "While Direct Preference Optimization (DPO) eliminates complex reward modeling in aligning large language models (LLMs) with human preferences, its online variant faces significant efficiency bottlenecks due to costly real-time preference sampling and the reward model annotation. We propose a novel framework that bridges offline-to-online alignment by systematically transforming static datasets into dynamically adaptive equivalents, without the need for an explicit reward model. Our approach employs paraphrasing techniques to preserve response correctness while aligning data distributions with model-generated outputs, circumventing the need for resource-intensive online interactions. Experiments on mathematical reasoning and conversational tasks demonstrate that our method matches or exceeds the performance of a fully online DPO. This work establishes a computationally sustainable paradigm for LLM alignment, particularly benefiting scenarios requiring iterative preference updates and domain adaptation."
}