Linear warmup sched #138
Conversation
Hello @ananyahjha93! Thanks for updating this PR.
Comment last updated at 2020-08-05 17:59:48 UTC
def _test_against_closed_form(self, scheduler, closed_form_scheduler, epochs=10):
    targets = []
    for epoch in range(epochs):
        closed_form_scheduler.step(epoch)
This comes from earlier versions of PyTorch, where scheduler.step() was called with the epoch before the epoch ran instead of after it.
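For context, a minimal sketch of the two stepping conventions (the model, optimizer, and StepLR choices here are illustrative, not code from this PR):

import torch

model = torch.nn.Linear(2, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5)

for epoch in range(10):
    # pre-1.1 convention: pass the epoch explicitly before running it,
    # e.g. scheduler.step(epoch)

    # current convention: step the optimizer during the epoch, then
    # step the scheduler once, with no argument, after the epoch
    optimizer.step()
    scheduler.step()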
But do we still support these old versions?
@justusschock the current version of PyTorch still defines _test_against_closed_form().
For now, all we need to do is define that function to support the old-style call. I have a warning and docs available for this.
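A rough sketch of what such a deprecation warning might look like (the helper name and wording here are assumptions, not the PR's actual code):

import warnings

def _warn_old_style_step():
    # Hypothetical helper: the PR mentions a warning for the old-style
    # scheduler.step(epoch) call, but its exact text is not shown here.
    warnings.warn(
        "Passing `epoch` to scheduler.step() follows the pre-1.1 PyTorch "
        "convention; call optimizer.step() first, then scheduler.step().",
        DeprecationWarning,
    )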
Codecov Report
@@            Coverage Diff             @@
##           master     #138      +/-   ##
==========================================
+ Coverage   91.14%   91.32%   +0.17%
==========================================
  Files          82       86       +4
  Lines        4056     4172     +116
==========================================
+ Hits         3697     3810     +113
- Misses        359      362       +3
Flags with carried forward coverage won't be shown.
@Borda feedback on formatting as well? I used https://github.com/psf/black on @nateraw's advice.
from pl_bolts.optimizers.lr_scheduler import LinearWarmupCosineAnnealingLR

EPSILON = 1e-12
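For reference, a minimal usage sketch of the scheduler this PR adds (the model, optimizer, and epoch counts are illustrative; the keyword names follow the PR's API):

import torch
from pl_bolts.optimizers.lr_scheduler import LinearWarmupCosineAnnealingLR

layer = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(layer.parameters(), lr=0.02)
# linearly warm up for 10 epochs, then cosine-anneal until epoch 40
scheduler = LinearWarmupCosineAnnealingLR(optimizer, warmup_epochs=10, max_epochs=40)

for epoch in range(40):
    # ... training steps for the epoch ...
    optimizer.step()
    scheduler.step()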
I think there is an epsilon defined somewhere in Lightning IIRC, maybe use that to avoid duplication?
cc @Borda, who might know where it is ^^
@Borda so I had this question about default epsilons in Lightning: numpy.finfo(numpy.float32).eps
is 1.1920929e-07, and since the learning rates used go down to 1e-6, we need higher precision in the schedulers. But numpy.finfo(numpy.float64).eps
comes out to 2.220446049250313e-16, which results in failing tests. So 1e-12 seemed like the ideal epsilon.
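To make the tolerance trade-off concrete, a small sketch (the comparison values are made up for illustration):

import numpy as np

print(np.finfo(np.float32).eps)  # ~1.1920929e-07, coarser than a 1e-6 learning rate
print(np.finfo(np.float64).eps)  # ~2.220446049250313e-16, too strict for these tests

# A mid-range tolerance can tell 1e-6-scale learning rates apart while
# still tolerating accumulated floating-point error:
EPSILON = 1e-12
lr_expected, lr_actual = 1.0e-6, 1.0e-6 + 1e-14
assert abs(lr_expected - lr_actual) <= EPSILON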
Force-pushed from a0afec2 to 35cb518
Force-pushed from 5767519 to ce206aa
Before submitting
What does this PR do?
Fixes # (issue).
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃