Santiago Akle Serano


2025

SWAN: An Efficient and Scalable Approach for Long-Context Language Modeling
Krishna C Puvvada | Faisal Ladhak | Santiago Akle Serano | Cheng-Ping Hsieh | Shantanu Acharya | Somshubra Majumdar | Fei Jia | Samuel Kriman | Simeng Sun | Dima Rekesh | Boris Ginsburg
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

We present SWAN, a decoder-only causal Transformer architecture that generalizes robustly to sequence lengths substantially longer than those seen during training. SWAN interleaves layers without positional encodings (NoPE) with sliding-window attention layers equipped with rotary positional encodings (SWA-RoPE), and applies a dynamic scaling mechanism to attention scores during inference. Experiments demonstrate that SWAN achieves strong length extrapolation without requiring additional long-context training. In addition, SWAN is more computationally efficient than the standard Transformer architecture, resulting in lower training cost and higher inference throughput. We further demonstrate that existing pre-trained decoder-only models can be adapted to the SWAN architecture with minimal continued training, enabling extended context lengths. Overall, our work presents an effective approach for scaling language models to longer contexts in a robust and efficient manner.
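The sketch below is an illustrative (not the authors') rendering of the ideas named in the abstract: a layer schedule that interleaves full-attention NoPE layers with sliding-window RoPE layers, the corresponding attention masks, and an inference-time score-scaling hook. The interleave ratio, window size, assumed training length, and the logarithmic scaling rule are placeholder assumptions, not values from the paper.

```python
"""Hedged sketch of a SWAN-style layer pattern (illustrative only)."""
import numpy as np


def layer_schedule(n_layers: int, swa_per_nope: int = 3) -> list[str]:
    """One NoPE (full-attention) layer followed by several SWA-RoPE layers,
    repeated; the 1:3 ratio here is a hypothetical choice."""
    pattern = ["nope"] + ["swa_rope"] * swa_per_nope
    return [pattern[i % len(pattern)] for i in range(n_layers)]


def causal_mask(seq_len: int, window: int | None = None) -> np.ndarray:
    """Boolean attention mask (True = attend). Full causal attention when
    window is None; otherwise causal attention limited to the most recent
    `window` tokens, as in a sliding-window layer."""
    q = np.arange(seq_len)[:, None]
    k = np.arange(seq_len)[None, :]
    mask = k <= q
    if window is not None:
        mask &= (q - k) < window
    return mask


def dynamic_scale(seq_len: int, train_len: int = 4096) -> float:
    """Placeholder for an inference-time attention-score scaling factor that
    grows once the context exceeds the training length. The actual rule used
    by SWAN may differ; this logarithmic form is only an assumption."""
    if seq_len <= train_len:
        return 1.0
    return float(np.log(seq_len) / np.log(train_len))


if __name__ == "__main__":
    print(layer_schedule(8))                 # ['nope', 'swa_rope', 'swa_rope', ...]
    print(causal_mask(5, window=2).astype(int))
    print(dynamic_scale(32768))              # > 1.0 beyond the assumed 4k training length
```

In this reading, the NoPE layers carry position-free global attention that extrapolates in length, while the SWA-RoPE layers supply precise local positional structure within a bounded window; the scaling hook would multiply attention scores (or the softmax temperature) at inference when the context exceeds the training length.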