FedReFT: Federated Representation Fine-Tuning with All-But-Me Aggregation
Fatema Siddika, Md Anwar Hossen, Juan Pablo Munoz, Tanya G. Roosta, Anuj Sharma, Ali Jannesari
Abstract
Parameter-efficient fine-tuning (PEFT) adapts large pre-trained models by updating only a small subset of parameters. Recently, Representation Fine-Tuning (ReFT) has emerged as an effective alternative: it shifts the fine-tuning paradigm from updating model weights to directly manipulating hidden representations that capture rich semantic information, and outperforms state-of-the-art PEFT methods in standalone settings. However, its application in Federated Learning (FL) remains challenging due to heterogeneity in clients’ data distributions, model capacities, and computational resources. To address these challenges, we introduce Federated Representation Fine-Tuning (FedReFT), a novel approach for fine-tuning clients’ hidden representations. FedReFT applies sparse intervention layers to steer hidden representations directly, offering a lightweight and semantically rich fine-tuning alternative that is well suited to edge devices. However, representation-level updates are especially vulnerable to aggregation mismatch under task heterogeneity, where naive averaging can corrupt semantic alignment. To mitigate this issue, we propose All-But-Me (ABM) aggregation, in which each client receives the aggregated updates of the other clients and partially incorporates them, enabling stable and personalized learning that balances local focus with global knowledge. We further design an adaptive update strategy inspired by Test-Time Computing (TTC) to balance local and global contributions under heterogeneous conditions. FedReFT achieves state-of-the-art performance on commonsense reasoning, arithmetic reasoning, and GLUE benchmarks, while delivering 1x–49x higher parameter efficiency compared to leading LoRA-based methods.
- Anthology ID: 2026.findings-eacl.227
- Volume: Findings of the Association for Computational Linguistics: EACL 2026
- Month: March
- Year: 2026
- Address: Rabat, Morocco
- Editors: Vera Demberg, Kentaro Inui, Lluís Marquez
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 4341–4362
- URL: https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.227/
- Cite (ACL): Fatema Siddika, Md Anwar Hossen, Juan Pablo Munoz, Tanya G. Roosta, Anuj Sharma, and Ali Jannesari. 2026. FedReFT: Federated Representation Fine-Tuning with All-But-Me Aggregation. In Findings of the Association for Computational Linguistics: EACL 2026, pages 4341–4362, Rabat, Morocco. Association for Computational Linguistics.
- Cite (Informal): FedReFT: Federated Representation Fine-Tuning with All-But-Me Aggregation (Siddika et al., Findings 2026)
- PDF: https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.227.pdf
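The abstract describes All-But-Me (ABM) aggregation only at a high level: each client receives the aggregated updates of the other clients and partially incorporates them. As a rough illustration of that idea (not the paper's actual algorithm; the function name, the flat-vector representation of updates, and the mixing coefficient `beta` are all hypothetical):

```python
import numpy as np

def abm_aggregate(client_updates, beta=0.5):
    """Sketch of All-But-Me-style aggregation (hypothetical parameterization).

    For each client i, average the updates of all other clients j != i,
    then blend that "all-but-me" average with client i's own update using
    a mixing coefficient beta. beta=0 keeps only the local update; beta=1
    replaces it entirely with the others' average.
    """
    updates = np.stack(client_updates)  # shape: (n_clients, dim)
    n = updates.shape[0]
    total = updates.sum(axis=0)
    personalized = []
    for i in range(n):
        # Mean of all updates except client i's own.
        others_avg = (total - updates[i]) / (n - 1)
        # Partial incorporation: balance local focus with global knowledge.
        blended = (1.0 - beta) * updates[i] + beta * others_avg
        personalized.append(blended)
    return personalized

# Example: three clients with 2-dimensional representation updates.
ups = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
out = abm_aggregate(ups, beta=0.5)
```

In this sketch, client 0 blends its own update `[1, 0]` with the average of the other two updates `[0.5, 1.0]`, yielding `[0.75, 0.5]`; the paper additionally adapts this balance via its TTC-inspired update strategy, which is not modeled here.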