Integrate with nnfabrik #2
Conversation
featurevis/main.py (outdated)

```python
def make(self, key):
    dataloaders, model = self.trained_model_table().load_model(key=key)
    neuron_id = (self.selector_table & key).fetch1("neuron_id")
```
This table should not need to be aware of neuron_id. Rather, this information should already be used by the selector_table (as part of key).
Fixed via 0a03e1c.
I'd like to propose a few changes. But first of all, it works quite nicely: it's fully functional with all of my models.
One problem with the current implementation is computation time: for each neuron and each training method, the model ensemble is constructed anew. This means the dataloaders also have to be constructed, which takes up to minutes for a given model ensemble. Given the way models are built at the moment, we need to re-use ensembles that were already built.
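One way to avoid rebuilding ensembles could be a small memo cache keyed by the ensemble's restriction key. This is only a sketch: `build_ensemble` is a hypothetical stand-in for the expensive construction step (e.g. `trained_model_table().load_model`), and the JSON-based key hashing is an assumption, not part of this PR.

```python
import json

# Hypothetical sketch: cache constructed ensembles (dataloaders + model) so
# repeated calls with the same key skip the expensive rebuild.
_ensemble_cache = {}


def _freeze(key):
    # Turn a restriction dict into a deterministic, hashable cache key.
    return json.dumps(key, sort_keys=True)


def get_ensemble(key, build_ensemble):
    """Return (dataloaders, model) for `key`, building at most once.

    `build_ensemble` stands in for the expensive construction step;
    its name and signature are assumptions for this sketch.
    """
    cache_key = _freeze(key)
    if cache_key not in _ensemble_cache:
        _ensemble_cache[cache_key] = build_ensemble(key)
    return _ensemble_cache[cache_key]
```

With this pattern, the first neuron for a given ensemble pays the construction cost and every subsequent neuron reuses the cached dataloaders and model.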
A related problem is CUDA support. In core.py of the original featurevis codebase, there was no way to move the model and input to CUDA. For my large images, the compute time per neuron and per MEI for an ensemble of 5 is also quite slow (~1 minute). A cuda flag would speed things up.
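A device flag could be threaded through the optimization roughly like this. This is a sketch only: `prepare_for_device` is not the actual core.py API, and in real code the `.to(device)` calls would be PyTorch methods on the module and tensor, with `device` set to `"cuda"` by the caller when available.

```python
def prepare_for_device(model, initial_input, device="cpu"):
    # Sketch: move model and input to the requested device before the
    # gradient ascent loop. With PyTorch, a caller might choose the device
    # via: device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)
    x = initial_input.to(device)
    return model, x
```

The point of the sketch is only that the flag must reach both the model and the input before optimization starts, so intermediate tensors stay on one device.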
And lastly, we need a way to structure the arguments of gradient ascent in core.py, which is handled by the MEIMethod table. In the current format, functions (such as post_update) need to live in a separate module, and there is no way to specify arguments for those functions as of now. Let's discuss how we could generalize this!
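One possible way to let the method configuration name both a function and its arguments would be to store an importable path together with a kwargs dict, and resolve it at run time. The `{"path": ..., "kwargs": ...}` layout is an assumption for this sketch, not the actual MEIMethod schema.

```python
from functools import partial
from importlib import import_module


def resolve_function(config):
    """Resolve {"path": "module.attr", "kwargs": {...}} into a callable.

    Sketch only: the config shape is hypothetical. The returned partial
    could then serve as e.g. a post_update hook in gradient ascent,
    with its extra arguments pre-bound from the method config.
    """
    module_path, _, name = config["path"].rpartition(".")
    func = getattr(import_module(module_path), name)
    return partial(func, **config.get("kwargs", {}))
```

For example, `resolve_function({"path": "math.isclose", "kwargs": {"rel_tol": 0.5}})` yields a two-argument callable with `rel_tol` already bound, which is the same shape a configurable post_update hook would take.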
update to use Konsti's MEIs
This PR integrates the MEI generation process into the nnfabrik framework.
See mei_demo.ipynb for a demonstration.