Yutong Bai
2025
AlphaOne: Reasoning Models Thinking Slow and Fast at Test Time
Junyu Zhang | Runpei Dong | Han Wang | Xuying Ning | Haoran Geng | Peihao Li | Xialin He | Yutong Bai | Jitendra Malik | Saurabh Gupta | Huan Zhang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
This paper presents AlphaOne (𝛼1), a universal framework for modulating reasoning progress in large reasoning models (LRMs) at test time. 𝛼1 first introduces the 𝛼 moment, which represents the scaled thinking phase with a universal parameter 𝛼. Within this scaled pre-𝛼-moment phase, it dynamically schedules slow-thinking transitions by modeling the insertion of reasoning transition tokens as a Bernoulli stochastic process. After the 𝛼 moment, 𝛼1 deterministically terminates slow thinking with the end-of-thinking token, thereby fostering fast reasoning and efficient answer generation. This approach unifies and generalizes existing monotonic scaling methods by enabling flexible and dense slow-to-fast reasoning modulation. Extensive empirical studies on various challenging benchmarks across mathematical, coding, and scientific domains demonstrate 𝛼1's superior reasoning capability and efficiency. Project page: https://alphaone-project.github.io/.
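The abstract's two-phase schedule can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the `base_budget` parameter, the literal transition strings ("wait", "&lt;/think&gt;"), and the per-step Bernoulli check are all assumptions made for the sketch.

```python
import random

def alpha_one_schedule(base_budget, alpha, p_slow, seed=0):
    """Sketch of an AlphaOne-style test-time schedule (hypothetical API).

    base_budget: average thinking length of the unmodulated model, in tokens
    alpha:       universal scaling parameter; the "alpha moment" falls at
                 alpha * base_budget tokens
    p_slow:      Bernoulli probability of inserting a slow-thinking
                 transition token at each pre-alpha-moment step
    Returns a list of (token_position, action) pairs.
    """
    rng = random.Random(seed)
    alpha_moment = int(alpha * base_budget)
    actions = []
    # Pre-alpha-moment phase: stochastically insert slow-thinking transitions,
    # modeled as independent Bernoulli trials at each step.
    for step in range(alpha_moment):
        if rng.random() < p_slow:
            actions.append((step, "insert 'wait'"))
    # Post-alpha-moment phase: deterministically end slow thinking so the
    # model switches to fast reasoning and answer generation.
    actions.append((alpha_moment, "insert '</think>'"))
    return actions
```

Raising 𝛼 stretches the slow-thinking phase; lowering it forces an earlier switch to fast reasoning, which is the slow-to-fast modulation the abstract describes.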
2024
Learning Dynamic Multi-attribute Interest for Personalized Product Search
Yutong Bai | Zhicheng Dou | Ji-Rong Wen
Findings of the Association for Computational Linguistics: EMNLP 2024
Personalized product search aims to learn personalized preferences from search logs and adjust the ranking lists returned by engines. Previous studies have extensively explored excavating valuable features to build accurate interest profiles. However, they overlook that the user's attention varies across product attributes (e.g., brand, category). Users may especially prefer specific attributes or switch their preferences between attributes dynamically. Instead, existing approaches mix up all attribute features and let the model automatically extract useful ones from rather complex scenarios. To solve this problem, in this paper, we propose a dynamic multi-attribute interest learning model to tackle the influence of attributes on user interests. Specifically, we design two interest profiling modules: attribute-centered and attribute-aware profiling. The former focuses on capturing the user's preferences on a single attribute, while the latter focuses on addressing the interests correlated with multiple attributes within the search history. Besides, we devise a dynamic contribution-weights strategy that sends explicit signals to the model to better determine the impact of different attributes. Experimental results on large-scale datasets illustrate that our model significantly improves the results of existing methods.
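The dynamic contribution-weights idea can be illustrated with a toy sketch: score each attribute, normalize the scores into weights, and blend per-attribute (attribute-centered) interest vectors with a cross-attribute (attribute-aware) vector. Every name here, the softmax weighting, and the `gamma` blend factor are illustrative assumptions, not the paper's actual architecture.

```python
import math

def dynamic_attribute_weights(attr_scores):
    """Turn raw per-attribute relevance scores into contribution weights
    via a softmax, so the weights are positive and sum to 1."""
    exps = {attr: math.exp(score) for attr, score in attr_scores.items()}
    total = sum(exps.values())
    return {attr: e / total for attr, e in exps.items()}

def fuse_profiles(attr_centered, attr_aware, weights, gamma=0.5):
    """Blend attribute-centered interest vectors (one per attribute) with a
    single attribute-aware vector, scaling each attribute's contribution by
    its dynamic weight. gamma controls the attribute-aware share."""
    dim = len(attr_aware)
    fused = [gamma * x for x in attr_aware]
    for attr, vec in attr_centered.items():
        w = weights.get(attr, 0.0)
        for i in range(dim):
            fused[i] += (1 - gamma) * w * vec[i]
    return fused
```

The point of the sketch is the explicit signal: attributes the user currently cares about receive higher weights, so their profiles dominate the fused interest representation instead of being averaged away.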