A Comparative Study of PEFT Methods for Python Code Generation

Johanna Männistö, Joseph Attieh, Jörg Tiedemann


Abstract
Fine-tuning language models incurs high costs in training, inference, and storage. Parameter-efficient fine-tuning (PEFT) methods have emerged as a more cost-effective alternative to full fine-tuning. However, little work has compared different PEFT approaches on tasks such as code generation. In this study, we examine the effect of various PEFT training methods on model performance in the task of Python code generation. We fine-tune four model families, ranging from 124M to 7B parameters, using three PEFT approaches alongside standard full fine-tuning. Our findings reveal that the effectiveness of each PEFT method varies with model size and the training corpus used.
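For readers unfamiliar with PEFT, the sketch below illustrates one common approach: LoRA applied through the Hugging Face peft library to a small causal language model. The model name, target modules, and hyperparameter values are illustrative assumptions and do not reflect the paper's actual experimental configuration.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Assumption: a ~124M-parameter model, comparable in scale to the smallest models studied.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA configuration; rank, alpha, and dropout values are illustrative, not the paper's.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trainable

Training such an adapted model updates only a small fraction of the parameters, which is the storage and training-cost advantage the abstract refers to.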
Anthology ID:
2025.nodalida-1.42
Volume:
Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025)
Month:
March
Year:
2025
Address:
Tallinn, Estonia
Editors:
Richard Johansson, Sara Stymne
Venue:
NoDaLiDa
Publisher:
University of Tartu Library
Pages:
390–396
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.nodalida-1.42/
Cite (ACL):
Johanna Männistö, Joseph Attieh, and Jörg Tiedemann. 2025. A Comparative Study of PEFT Methods for Python Code Generation. In Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025), pages 390–396, Tallinn, Estonia. University of Tartu Library.
Cite (Informal):
A Comparative Study of PEFT Methods for Python Code Generation (Männistö et al., NoDaLiDa 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.nodalida-1.42.pdf