A Comparative Study of PEFT Methods for Python Code Generation

dc.contributor.author: Männistö, Johanna
dc.contributor.author: Attieh, Joseph
dc.contributor.author: Tiedemann, Jörg
dc.contributor.editor: Johansson, Richard
dc.contributor.editor: Stymne, Sara
dc.coverage.spatial: Tallinn, Estonia
dc.date.accessioned: 2025-02-18T13:45:48Z
dc.date.available: 2025-02-18T13:45:48Z
dc.date.issued: 2025-03
dc.description.abstract: Fine-tuning language models incurs high costs in training, inference and storage. Parameter-efficient fine-tuning (PEFT) methods have emerged as a more cost-effective alternative to full fine-tuning. However, limited work has compared different PEFT approaches for tasks like code generation. In this study, we examine the effect of various PEFT training methods on model performance in the task of Python code generation. We fine-tune four model families, ranging from 124M to 7B parameters, using three PEFT approaches alongside standard full fine-tuning. Our findings reveal that the effectiveness of each PEFT method varies with the model size and the corpus used.
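The record does not list the specific PEFT methods compared; LoRA is one widely used example of the family. As a hedged illustration of the general idea only (not the paper's setup), the core of LoRA can be sketched in plain NumPy: the pretrained weight W stays frozen and only a low-rank update B @ A is trained, which is why PEFT cuts training and storage cost.

```python
import numpy as np

# Minimal sketch of LoRA, one common PEFT method (the paper's exact
# methods are not named in this record). The frozen weight W is adapted
# by a low-rank product B @ A, so only r * (d_in + d_out) parameters are
# trained instead of the full d_in * d_out.
rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4              # rank r << d_in, d_out

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init

def adapted_forward(x):
    # y = (W + B @ A) x ; with B = 0 at init, the model is unchanged
    return (W + B @ A) @ x

x = rng.normal(size=d_in)
assert np.allclose(adapted_forward(x), W @ x)  # no-op at initialization

trainable = A.size + B.size             # 512 parameters
full = W.size                           # 4096 parameters
print(f"trainable: {trainable} vs full fine-tuning: {full}")
```

Zero-initializing B guarantees the adapter starts as an identity change, so fine-tuning begins exactly from the pretrained model's behavior; at 7B-parameter scale the same ratio is what makes PEFT storage-cheap, since only A and B need to be saved per task.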
dc.identifier.uri: https://hdl.handle.net/10062/107234
dc.language.iso: en
dc.publisher: University of Tartu Library
dc.relation.ispartofseries: NEALT Proceedings Series, No. 57
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.title: A Comparative Study of PEFT Methods for Python Code Generation
dc.type: Article

Files

Original bundle

Name: 2025_nodalida_1_42.pdf
Size: 296.38 KB
Format: Adobe Portable Document Format