Ryan Solgi
2026
FLAT-LLM: Fine-grained Low-rank Activation Space Transformation for Large Language Model Compression
Jiayi Tian | Ryan Solgi | Jinming Lu | Yifan Yang | Hai Li | Zheng Zhang
Findings of the Association for Computational Linguistics: EACL 2026
Large Language Models (LLMs) have enabled remarkable progress in natural language processing, yet their high computational and memory demands pose challenges for deployment in resource-constrained environments. Although recent low-rank decomposition methods offer a promising path for structural compression, they often suffer from accuracy degradation, require expensive calibration procedures, and yield inefficient model architectures that hinder real-world inference speedups. In this paper, we propose FLAT-LLM, a fast, accurate, training-free structural compression method based on fine-grained low-rank transformations in the activation space. Specifically, we reduce the hidden dimension by transforming the weights using truncated eigenvectors computed via head-wise Principal Component Analysis, and employ a greedy budget redistribution strategy to adaptively allocate ranks across decoders. FLAT-LLM achieves efficient and effective weight compression without recovery fine-tuning, completing calibration within a few minutes. Evaluated across 5 models and 11 datasets, FLAT-LLM outperforms structural pruning baselines in generalization and downstream performance, while delivering inference speedups over decomposition-based methods.
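A minimal sketch of the core idea in the abstract — projecting each attention head's weights onto the top eigenvectors of that head's activation covariance. This is an illustrative toy, not the paper's implementation; the function name, shapes, and rank choice are all assumptions:

```python
import numpy as np

def headwise_pca_compress(X_heads, W_heads, rank):
    """Compress each head's weight slice via truncated PCA eigenvectors
    of its calibration activations (illustrative sketch).

    X_heads: list of (n_samples, d_head) calibration activations, one per head
    W_heads: list of (d_head, d_out) weight slices, one per head
    Returns a list of (U, U^T W) low-rank factor pairs.
    """
    factors = []
    for X, W in zip(X_heads, W_heads):
        cov = X.T @ X / X.shape[0]              # head-wise activation covariance
        _, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
        U = eigvecs[:, -rank:]                  # keep top-`rank` eigenvectors
        factors.append((U, U.T @ W))            # reduced-dimension weight
    return factors

# Toy usage: 4 heads, head dim 16, compressed to rank 4
rng = np.random.default_rng(0)
X_heads = [rng.standard_normal((128, 16)) for _ in range(4)]
W_heads = [rng.standard_normal((16, 16)) for _ in range(4)]
factors = headwise_pca_compress(X_heads, W_heads, rank=4)
```

Each head's `d_head x d_out` weight is replaced by a `d_head x rank` projection and a `rank x d_out` transformed weight, cutting the effective hidden dimension per head.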
2025
Saten: Sparse Augmented Tensor Networks for Post-Training Compression of Large Language Models
Ryan Solgi | Kai Zhen | Rupak Vignesh Swaminathan | Nathan Susanj | Athanasios Mouchtaris | Siegfried Kunzmann | Zheng Zhang
Findings of the Association for Computational Linguistics: EMNLP 2025
The efficient implementation of large language models (LLMs) is crucial for deployment on resource-constrained devices. Low-rank tensor compression techniques, such as tensor-train (TT) networks, have been widely studied for over-parameterized neural networks. However, their application to compressing pre-trained LLMs for downstream tasks (post-training) remains challenging due to the high-rank nature of pre-trained LLMs and the lack of access to pre-training data. In this study, we investigate low-rank tensorized LLMs during fine-tuning and propose sparse augmented tensor networks (Saten) to enhance their performance. The proposed Saten framework enables full model compression. Experimental results demonstrate that Saten enhances both accuracy and compression efficiency in tensorized language models, achieving state-of-the-art performance.
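The sparse-augmentation idea can be illustrated in a simplified matrix setting: approximate a weight with a truncated low-rank factor, then keep only the largest-magnitude entries of the residual as a sparse correction. This is a hedged analogue of augmenting a tensor network with a sparse term, not the Saten algorithm itself; the rank, sparsity level, and function name are illustrative:

```python
import numpy as np

def sparse_augmented_lowrank(W, rank, sparsity):
    """Approximate W as (low-rank factor) + (sparse residual), keeping the
    top `sparsity` fraction of residual entries by magnitude (sketch)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # truncated SVD approximation
    residual = W - low_rank
    k = int(sparsity * W.size)                        # number of entries to keep
    thresh = np.partition(np.abs(residual).ravel(), -k)[-k]
    sparse = np.where(np.abs(residual) >= thresh, residual, 0.0)
    return low_rank, sparse

# Toy usage on a random 64x64 "weight"
rng = np.random.default_rng(1)
W = rng.standard_normal((64, 64))
L, S = sparse_augmented_lowrank(W, rank=8, sparsity=0.05)
```

Because the sparse term absorbs the largest residual errors, the combined approximation `L + S` is never worse than the low-rank factor alone, at the cost of storing a small number of extra nonzeros.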