Describe the issue:
Hello!
I've been working with NNI recently and I really like the Retiarii Mutation API.
While running one-shot NAS experiments on custom datasets with ENAS and DARTS, I encountered a problem. All trainer classes in `nni.retiarii.oneshot.pytorch` (e.g., `ENASTrainer`) construct the `torch.utils.data.DataLoader` instances in `_init_dataloader`. Furthermore, the `_init_dataloader` function does a 50:50 split of the given PyTorch `Dataset` instance to construct the train and validation sets (roughly as sketched below).

However, this behaviour is rather limiting. It is particularly problematic when custom datasets or datasets with predefined train-valid-test splits are used. Therefore, my question: would it be possible to change this behaviour?
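For context, here is a minimal sketch of the splitting behaviour described above. It is a simplification for illustration, not the library's exact code; the toy dataset and batch size are made up:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.sampler import SubsetRandomSampler

# Toy stand-in for a user-supplied PyTorch Dataset.
dataset = TensorDataset(torch.randn(100, 8), torch.randint(0, 2, (100,)))

# 50:50 split: the first half drives weight training, the second half
# drives the architecture search, regardless of any predefined splits.
mid = len(dataset) // 2
indices = list(range(len(dataset)))
train_loader = DataLoader(dataset, batch_size=16,
                          sampler=SubsetRandomSampler(indices[:mid]))
valid_loader = DataLoader(dataset, batch_size=16,
                          sampler=SubsetRandomSampler(indices[mid:]))
```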
Possible solutions:

1. Pass the `DataLoader` instances directly to the trainer class instead of constructing them in `_init_dataloader`.
2. Remove the `_init_dataloader` call from the trainer's `__init__` and make the function configurable for the user.

Currently, I bypass this problem by overriding the behaviour of `_init_dataloader` (see the sketch after this list). However, I believe that these changes would make the library more generally applicable to a broader range of use cases. I am not sure whether other people have encountered this problem before.
Environment:
Thank you,
Thomas