[Feature] Modification of the finetune mechanism #177
Conversation
…ing string; adjust config; adjust trainer and context; create optimizer in the beginning of a routine

# Conflicts:
#	federatedscope/core/trainers/trainer.py

@@ -0,0 +1,20 @@
def use_diff(func):
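The diff excerpt only shows the decorator's signature. As a rough sketch (illustrative only, not the actual FederatedScope implementation; `ToyTrainer` and `get_model_para` are hypothetical stand-ins), a `use_diff`-style decorator can snapshot the model parameters before the wrapped training routine runs and return the parameter delta afterwards, which is what a server aggregating updates typically consumes:

```python
import copy
import functools

def use_diff(func):
    """Sketch: wrap a training routine so it returns the parameter
    *delta* produced by local training instead of the raw parameters."""
    @functools.wraps(func)
    def wrapper(trainer, *args, **kwargs):
        # Snapshot parameters before the routine runs.
        before = copy.deepcopy(trainer.get_model_para())
        num_samples = func(trainer, *args, **kwargs)
        after = trainer.get_model_para()
        # Return the difference rather than the raw parameters.
        delta = {k: after[k] - before[k] for k in after}
        return num_samples, delta
    return wrapper

class ToyTrainer:
    """Minimal stand-in trainer with parameters stored in a dict."""
    def __init__(self):
        self.para = {'w': 1.0, 'b': 0.5}

    def get_model_para(self):
        return dict(self.para)

    @use_diff
    def train(self):
        # Pretend one local update moved the parameters.
        self.para = {'w': 1.5, 'b': 0.25}
        return 10  # number of samples seen

n, delta = ToyTrainer().train()
print(n, delta)  # -> 10 {'w': 0.5, 'b': -0.25}
```

Implementing this as a decorator keeps the delta logic out of every individual training routine, which is presumably the motivation discussed below.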
@rayrayraykk Please help me check this modification. Make sure it is consistent with my original implementation. Thanks!
This part looks good to me.
Good job! It might help us to split the functionalities of the trainer. And @yxdyc can double-check the details of the trainer.
Please note that the modification of the config (such as `cfg.optimizer` -> `cfg.train.optimizer`) would affect a large number of existing files, and we should make sure the provided examples and the declarations have been updated.
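To illustrate why such a key move touches many files, here is a toy sketch (the config layout is mimicked with `SimpleNamespace`, and the `get_lr` helper is hypothetical, not a FederatedScope API): every call site that reads the old path must be updated, or temporarily bridged with a shim that accepts both locations:

```python
from types import SimpleNamespace

# Hypothetical before/after config layouts (illustrative only).
old_cfg = SimpleNamespace(optimizer=SimpleNamespace(lr=0.1))
new_cfg = SimpleNamespace(train=SimpleNamespace(optimizer=SimpleNamespace(lr=0.1)))

def get_lr(cfg):
    # Each call site like this must change from cfg.optimizer.lr to
    # cfg.train.optimizer.lr; during migration a shim can read both.
    opt = getattr(cfg, 'optimizer', None) or cfg.train.optimizer
    return opt.lr

print(get_lr(old_cfg), get_lr(new_cfg))  # -> 0.1 0.1
```

Such a shim is only a transition aid; the examples and config declarations still need to be updated to the new path, as noted above.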
- g['lr'] = cfg.personalization.lr
- ctx.pFedMe_outer_lr = cfg.optimizer.lr
+ g['lr'] = ctx.cfg.personalization.lr
+ ctx.pFedMe_outer_lr = ctx.cfg.train.optimizer.lr
Shall we have `ctx.cfg.xxx`? @yxdyc
Both are OK, as we don't usually change `cfg.personalization.lr` during the FL course.
LGTM. The modification of the configs can invalidate the scripts of our three FL benchmarks. We can discuss later whether we need to make our three benchmarks compatible with FS 0.2.0.
# Conflicts:
#	federatedscope/core/auxiliaries/utils.py
#	federatedscope/core/configs/cfg_fl_algo.py
#	federatedscope/core/configs/cfg_fl_setting.py
#	federatedscope/core/configs/cfg_hpo.py
#	federatedscope/core/configs/cfg_training.py
#	federatedscope/core/trainers/context.py
#	federatedscope/core/trainers/torch_trainer.py
#	federatedscope/core/trainers/trainer.py
#	federatedscope/core/worker/client.py
LGTM, but please rebase onto master to trigger a new CI test for the format checks, thanks.
Modified accordingly.
LGTM
This PR is mainly for the modification of the finetune mechanism (#148), but we also make small changes to other functions, as follows:

Finetune
- Move some options from `cfg.federate` into `cfg.train`, as they are more relevant to the training, including `local_update_step` and `batch_or_epoch`
- Separate `cfg.finetune` and `cfg.train` in the config to support different parameters for finetuning and training (e.g. `optimizer.lr`)
- Add a `finetune` function in the basic trainer (`benchmark`)

Enums and Decorators
- Add `enums.py` to avoid using strings and the inconsistency issues they cause
- Add `decorators.py` to keep the code clean

Optimizer
- Create `ctx.optimizer` in the beginning of the routine function rather than within the `context`, to solve "The optimizer may track some values across different training routines" (#136)

To be discussed
- @joneswong please check if the following modifications are appropriate
- `use_diff` is implemented by a decorator `use_diff`.
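The optimizer change above can be illustrated with a toy momentum optimizer (a sketch only; `ToySGD` and `run_routine` are hypothetical, not FederatedScope code): stateful internals such as a momentum buffer silently carry over between train/finetune/evaluate routines unless the optimizer is rebuilt at the start of each routine, which is the issue #136 describes.

```python
class ToySGD:
    """Toy SGD with momentum; `velocity` is the kind of internal state
    that can leak across training routines if the optimizer lives in
    the long-lived context."""
    def __init__(self, lr, momentum=0.9):
        self.lr, self.momentum = lr, momentum
        self.velocity = 0.0

    def step(self, param, grad):
        self.velocity = self.momentum * self.velocity + grad
        return param - self.lr * self.velocity

def run_routine(param, grads, lr):
    # Creating the optimizer at the beginning of each routine guarantees
    # fresh state (velocity == 0) and lets each routine use its own lr,
    # e.g. cfg.train.optimizer.lr vs cfg.finetune.optimizer.lr.
    opt = ToySGD(lr)
    for g in grads:
        param = opt.step(param, g)
    return param

p = run_routine(1.0, [0.5, 0.5], lr=0.1)    # training routine
p = run_routine(p, [0.5], lr=0.01)          # finetune: clean momentum state
```

Had the same `ToySGD` instance been reused, the finetune step would have inherited the nonzero velocity from training and taken a larger update than its own gradient and learning rate warrant.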