Sungho Jang
2025
PLEX: Adaptive Parameter-Efficient Fine-Tuning for Code LLMs using Lottery-Tickets
Jaeseong Lee | Hojae Han | Jongyoon Kim | Seung-won Hwang | Naun Kang | KyungJun An | Sungho Jang
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: Industry Track)
Fine-tuning large language models (LLMs) for code generation is challenging due to computational costs and the underrepresentation of some programming languages (PLs) in pre-training. We propose PLEX, a lottery-ticket based parameter-efficient fine-tuning (PEFT) method that adapts LLMs to both well-supported and underrepresented PLs. During lottery ticket selection, PLEX employs a dual strategy: for well-represented PLs, it leverages the LLM’s full parametric knowledge by selecting from all layers, while for underrepresented PLs, it narrows the selection scope to dense layers, prioritizing the most influential parameters. Additionally, PLEX-E, a low-rank extension of PLEX, further reduces computational costs by limiting the scope of fine-tuning. On MultiPL-E benchmarks, PLEX achieves state-of-the-art performance among PEFT methods, while PLEX-E maintains competitive results with reduced computational overhead. Both variants demonstrate effective adaptation across diverse programming languages, particularly for those underrepresented in pre-training.
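The abstract describes picking a small, highly influential subset of parameters (a "lottery ticket") to fine-tune, and narrowing the search scope to dense layers for underrepresented PLs. The snippet below is a minimal, tensor-level sketch of that general idea in PyTorch, not the authors' released implementation: the function name `select_trainable_params`, the `ratio` and `dense_only` arguments, and the assumption that FFN sublayers contain "mlp" in their parameter names are all illustrative choices.

```python
# Illustrative sketch only: coarse, tensor-level "lottery ticket" selection.
# Freeze everything, then unfreeze the tensors with the largest mean
# magnitude within the chosen scope (all layers vs. dense/FFN layers only).
import torch
from torch import nn

def select_trainable_params(model: nn.Module, ratio: float = 0.01,
                            dense_only: bool = False):
    # Freeze all parameters first.
    for p in model.parameters():
        p.requires_grad = False

    # Score each tensor in scope by its mean absolute magnitude
    # (a simple stand-in for "most influential parameters").
    scored = []
    for name, p in model.named_parameters():
        if dense_only and "mlp" not in name:  # assumed FFN naming convention
            continue
        scored.append((p.abs().mean().item(), name, p))

    # Unfreeze only the top `ratio` fraction of tensors in scope.
    scored.sort(key=lambda t: t[0], reverse=True)
    k = max(1, int(len(scored) * ratio))
    for _, _, p in scored[:k]:
        p.requires_grad = True
    return [name for _, name, _ in scored[:k]]
```

In this simplified view, `dense_only=True` corresponds to the abstract's narrowed scope for underrepresented PLs, while `dense_only=False` searches over all layers for well-represented ones; the actual PLEX selection criterion and granularity may differ.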