Difference on epochs in network-slimming #3

Hi, I found that the default number of epochs in network-slimming (scratch training of VGG-11) for ImageNet, which is 90 in the code, is different from the original paper, which uses 60.
Hi, thanks for your interest in our code! Yes, in the original Network Slimming paper, the number of epochs for ImageNet is 60. In this repo, we use the official PyTorch ImageNet training schedule, which is 90 epochs.
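For reference, a minimal sketch of what that 90-epoch schedule typically looks like, assuming the official PyTorch ImageNet example defaults (SGD, lr 0.1, momentum 0.9, weight decay 1e-4, lr divided by 10 every 30 epochs); the exact values used in this repo's code may differ:

```python
import torch

# Placeholder model; the repo trains VGG-11 on ImageNet.
model = torch.nn.Linear(10, 10)

# Assumed hyperparameters matching the official PyTorch ImageNet example.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(90):      # 90 epochs, as used in this repo
    # ... one pass over the ImageNet training set would go here ...
    optimizer.step()         # stands in for the per-batch parameter updates
    scheduler.step()         # lr: 0.1 -> 0.01 at epoch 30 -> 0.001 at epoch 60
```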
Hi @erichhhhho, I'm an author of both this paper and the Network Slimming paper. Using 60 epochs in the original Network Slimming paper was due to resource limits at that time, and there was a significant bug (a later-discovered bug concerning activation functions in the fc layers) in the original paper's VGG-11 result on ImageNet. So in this project, we fixed the bug and used 90 epochs (standard in many papers).
@Eric-mingjie @liuzhuang13 I see. Thank you for your clarification.
Btw, there is a bug in network-slimming cifar10 main_B.py (line 102): the check "if args.refine:" raises AttributeError: 'Namespace' object has no attribute 'refine'. In your code args.refine should already have become args.scratch, so this leftover reference is redundant.
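A minimal sketch of the kind of fix being suggested; the flag name args.scratch comes from the discussion above, while the surrounding logic is illustrative and not the repo's actual main_B.py:

```python
import argparse
import torch

parser = argparse.ArgumentParser()
# The script is assumed to define a --scratch flag (path to a pruned model to train from scratch);
# the stale reference to args.refine is what triggers the AttributeError.
parser.add_argument('--scratch', default='', type=str, help='path to the pruned model')
args = parser.parse_args()

# Around line 102: use the existing flag instead of the removed args.refine.
if args.scratch:
    checkpoint = torch.load(args.scratch)   # load the pruned model's checkpoint
    # ... rebuild the model from the saved config (assumed key) and train it from scratch ...
```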
Hi @erichhhhho! Thanks for pointing it out! I just pushed a fix.
@liuzhuang13 I downloaded the PyTorch model of scratch-E from the trained-model link (https://github.com/Eric-mingjie/rethinking-network-pruning/tree/master/imagenet/network-slimming#models). I found that the epoch value stored in the checkpoint does not seem to be 90.
The epoch is actually 90. Don't be bothered by the epoch value stored in the checkpoint.
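A minimal sketch of how one might inspect the downloaded checkpoint; the filename and the 'epoch' key are assumptions about the checkpoint layout, not confirmed by the repo:

```python
import torch

# Load the downloaded scratch-E checkpoint on CPU and look at its metadata.
checkpoint = torch.load('scratch_e_vgg11.pth.tar', map_location='cpu')  # hypothetical filename
print(checkpoint.keys())          # e.g. dict keys such as 'state_dict', 'epoch' (assumed)
print(checkpoint.get('epoch'))    # the stored epoch counter; training still ran for 90 epochs
```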