NAS documents general improvements (v2.8) #4942
Conversation
docs/source/nas/evaluator.rst
Outdated
...

# Use a callable returning a model
evaluator.evaluate(Model)
Is this `Model` a model space or a regular PyTorch model?
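For reference, a minimal sketch of the callable form shown in the snippet, under the assumption that `evaluator.evaluate` simply calls the object it is given to obtain a model. The `Model` definition and the `pl.Classification(...)` arguments below are placeholders, not the documented example.

```python
import torch.nn as nn
import nni.retiarii.evaluator.pytorch.lightning as pl

class Model(nn.Module):
    """Placeholder model class; the snippet only requires that Model() returns a model."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(32, 10)

    def forward(self, x):
        return self.fc(x)

evaluator = pl.Classification(...)   # dataloader arguments omitted, as in the doc snippet

# The class itself is passed as a callable; the evaluator is expected to call it
# (with no arguments) to obtain the model it trains and evaluates.
evaluator.evaluate(Model)
```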
with fixed_arch(exported_model):
    model = Model()
# Then use evaluator.evaluate
evaluator.evaluate(model)
Do we need to add a note here telling users that the same evaluator instance should not be used both for training the supernet and for evaluating a subnet?
I think the problem is restricted to pytorch-lightning.
OK. BTW, this example implies that the `Model` cannot have init arguments, right?
No. There is no such implication.
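A small sketch of what the reply states: the class frozen with `fixed_arch` can still take constructor arguments. The model space, the `'conv'` label, and the exported architecture dict below are made up for illustration.

```python
import nni.retiarii.nn.pytorch as nn
from nni.retiarii import fixed_arch, model_wrapper

@model_wrapper
class Model(nn.Module):
    """Hypothetical model space whose constructor takes regular arguments."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.LayerChoice(
            [nn.Conv2d(3, 16, 3, padding=1), nn.Conv2d(3, 16, 5, padding=2)],
            label='conv')
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        x = self.conv(x).mean([2, 3])
        return self.head(x)

# Hypothetical exported architecture: the chosen candidate for the 'conv' choice.
exported_model = {'conv': 0}

with fixed_arch(exported_model):
    # Constructor arguments are passed as usual; fixed_arch only fixes the mutable choices.
    model = Model(num_classes=100)
```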
@@ -75,7 +75,14 @@ Starting from v2.8, the usage of one-shot strategies are much alike to multi-tri

import nni.retiarii.strategy as strategy
import nni.retiarii.evaluator.pytorch.lightning as pl
evaluator = pl.Classification(...)
evaluator = pl.Classification(
    # Need to use pl.DataLoader instead of torch.utils.data.DataLoader here.
It might be better to add one more sentence: users can refer to the "serializer" doc if they stick to torch.utils.data.DataLoader.
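For reference, a sketch of the alternative the comment alludes to: keeping `torch.utils.data.DataLoader` but wrapping it with `nni.trace` so it can be serialized. The dataset and parameters below are illustrative.

```python
import nni
import torch
from torch.utils.data import DataLoader, TensorDataset
import nni.retiarii.evaluator.pytorch.lightning as pl

# Illustrative in-memory dataset.
dataset = TensorDataset(torch.randn(128, 3, 32, 32), torch.randint(0, 10, (128,)))

# Option 1: the DataLoader re-exported by NNI, as the updated snippet recommends.
train_loader = pl.DataLoader(dataset, batch_size=32)

# Option 2: keep torch.utils.data.DataLoader but make it traceable with nni.trace,
# which is the route the serialization ("serializer") doc describes.
train_loader = nni.trace(DataLoader)(dataset, batch_size=32)
```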
""" | ||
warnings.warn( | ||
'Direct export from RandomOneShot returns an arbitrary architecture. ' | ||
'You might want to use the checkpoint in another search.', |
-> Sampling the best architecture from this well-trained supernet is another search process; users can use the supernet's checkpoint to run that search.
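If the suggested wording were adopted, the quoted warning might read roughly as follows (a sketch of the proposal, not the final text):

```python
import warnings

warnings.warn(
    'Direct export from RandomOneShot returns an arbitrary architecture. '
    'Sampling the best architecture from this well-trained supernet is another search process; '
    'you can use the supernet checkpoint to run that search.',
    UserWarning
)
```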
baseline_decay : float
    Decay factor of baseline. New baseline will be equal to ``baseline_decay * baseline_old + reward * (1 - baseline_decay)``.
    Decay factor of reward baseline, which is used to normalize the reward in RL.
    At each step, the new ew baseline will be equal to ``baseline_decay * baseline_old + reward * (1 - baseline_decay)``.
ew baseline?
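For context, a tiny sketch of the exponential-moving-average baseline update that the docstring describes (names and numbers are illustrative):

```python
def update_baseline(baseline_old: float, reward: float, baseline_decay: float = 0.99) -> float:
    """New baseline = baseline_decay * baseline_old + reward * (1 - baseline_decay)."""
    return baseline_decay * baseline_old + reward * (1 - baseline_decay)

baseline = 0.0
for reward in [0.52, 0.61, 0.58]:       # rewards observed at successive RL steps (made up)
    baseline = update_baseline(baseline, reward, baseline_decay=0.9)
    advantage = reward - baseline        # baseline-normalized reward used by the policy gradient
```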
Description
Updates some docstrings and adds more explanations.
Test Options
Checklist
How to test
N/A