BSFA: Leveraging the Subspace Dichotomy to Accelerate Neural Network Training

WenJie Zhou, Bohan Wang, Wei Chen, Xueqi Cheng


Abstract
Recent studies (CITATION) highlight a fundamental dichotomy in deep learning optimization: although parameter updates along the top eigendirections of the loss Hessian (Dom-space) capture most of the update magnitude, they often contribute minimally to loss reduction. In contrast, updates in the orthogonal component (Bulk-space) have smaller magnitudes but drive most learning progress. In this work, we further advance the understanding of this phenomenon and introduce the Bulk-Space-Filtration-Accelerator (BSFA), a novel plug-and-play framework. BSFA accelerates training by differentially scaling update components projected onto these distinct subspaces, simultaneously enhancing stability by moderating updates in the dominant subspace and boosting convergence speed by amplifying those in the bulk-space. To ensure BSFA is both practical and scalable for contemporary large models, we introduce two key innovations: an efficient estimator that applies Principal Component Analysis (PCA) to historical updates for fast subspace estimation, and a block-wise strategy that performs this estimation on a per-parameter-block basis. These designs make BSFA computationally tractable and highly effective. We demonstrate BSFA's acceleration across various tasks, notably achieving approximately 2× speedup when pre-training LLaMA-72M on WikiText-103 and LLaMA-134M on OpenWebText compared to vanilla AdamW.
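The abstract describes the core mechanism only at a high level; the sketch below illustrates the filtering idea under stated assumptions. It uses PCA (via SVD) on a buffer of recent updates for one parameter block to estimate the dominant subspace, then dampens the component of a new update in that subspace and amplifies the orthogonal (bulk) component. The function name, hyperparameters (k, alpha_dom, alpha_bulk, history size), and interface are illustrative assumptions, not the authors' implementation.

import numpy as np

def bsfa_filter(update, history, k=4, alpha_dom=0.1, alpha_bulk=2.0):
    """Split one block's update into dominant/bulk components and rescale them.

    update:  flattened update vector of shape (d,), e.g. from AdamW.
    history: array of shape (m, d) holding m recent updates for this block,
             used as a cheap PCA-based proxy for the top Hessian eigendirections.
    """
    # PCA on historical updates: the top-k right singular vectors span the
    # estimated dominant (Dom-space) directions for this parameter block.
    _, _, vt = np.linalg.svd(history, full_matrices=False)
    basis = vt[:k]                    # (k, d), orthonormal rows

    dom = basis.T @ (basis @ update)  # projection onto the estimated Dom-space
    bulk = update - dom               # orthogonal Bulk-space component

    # Moderate the dominant component, amplify the bulk component.
    return alpha_dom * dom + alpha_bulk * bulk

In practice such a filter would wrap the raw update produced by a base optimizer (e.g. AdamW) and be applied independently to each parameter block, consistent with the block-wise strategy the abstract describes.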
Anthology ID:
2025.emnlp-main.952
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
18845–18860
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.952/
Cite (ACL):
WenJie Zhou, Bohan Wang, Wei Chen, and Xueqi Cheng. 2025. BSFA: Leveraging the Subspace Dichotomy to Accelerate Neural Network Training. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 18845–18860, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
BSFA: Leveraging the Subspace Dichotomy to Accelerate Neural Network Training (Zhou et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.952.pdf
Checklist:
2025.emnlp-main.952.checklist.pdf