
Integrate with nnfabrik #2

Merged (45 commits, Feb 20, 2020)

Conversation

christoph-blessing (Member)

This PR integrates the MEI generation process into the nnfabrik framework.

See mei_demo.ipynb for a demonstration.


def make(self, key):
    # Load the trained model and its dataloaders for the given key.
    dataloaders, model = self.trained_model_table().load_model(key=key)
    # Fetch the id of the neuron whose MEI should be generated.
    neuron_id = (self.selector_table & key).fetch1("neuron_id")
Member:

This table should not need to be aware of neuron_id. Rather, that information should already be used by the selector_table (as part of the key).
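
A minimal sketch of the restructuring suggested above, assuming a hypothetical get_output_selected_model method on the selector table (the method name and signature are illustrative, not necessarily the actual API):

def make(self, key):
    # The trained-model table returns the model (and dataloaders) for this key.
    dataloaders, model = self.trained_model_table().load_model(key=key)
    # Hypothetical: the selector table resolves which output unit to optimize
    # from the key itself, so this table never needs to know about neuron_id.
    selected_model = (self.selector_table & key).get_output_selected_model(model)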

christoph-blessing (Member, Author):

Fixed via 0a03e1c.

@KonstantinWilleke (Contributor) left a comment:

I'd like to propose a few changes. But first of all, it works quite nicely. It's fully functional with all of my models.

One problem with the current implementation is the computation time: for each neuron and each training method, the model ensemble is constructed anew. This also means that the dataloaders have to be rebuilt, which can take several minutes for a given model ensemble. Given the way the models are built at the moment, I think we have to re-use ensembles that have already been built.
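
One possible direction, sketched here as a simple in-process cache and ignoring DataJoint specifics; the ensemble_hash key and the builder callable are assumptions, not the existing API:

from functools import lru_cache

@lru_cache(maxsize=None)
def load_ensemble(ensemble_hash, builder):
    # builder is whatever function currently constructs the dataloaders and
    # member models; caching by ensemble_hash avoids rebuilding them per neuron.
    return builder(ensemble_hash)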

A related problem is the CUDA handling. In core.py of the original featurevis codebase, there was no way to move the model and input to CUDA. For my large images, the compute time per neuron and per MEI for an ensemble of 5 models is also quite slow (~1 minute). A CUDA flag would speed things up.
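
For illustration, here is one way a device flag could be threaded through a gradient ascent loop; this is a generic PyTorch sketch, not the actual core.py signature:

import torch

def gradient_ascent(model, initial_input, step_size=0.1, n_steps=100, device="cuda"):
    # Move both the model and the optimized input to the requested device.
    model = model.to(device).eval()
    mei = initial_input.detach().clone().to(device).requires_grad_(True)
    optimizer = torch.optim.SGD([mei], lr=step_size)
    for _ in range(n_steps):
        optimizer.zero_grad()
        # Negate the activation so that minimizing the loss maximizes the response.
        (-model(mei)).mean().backward()
        optimizer.step()
    return mei.detach().cpu()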

And lastly, we need a way to structure the arguments of gradient ascent in core.py, which is handled by the MEIMethod table. In the current format, the functions (such as post_update) need to live in a separate module, and there is no way to specify arguments for those functions. Let's discuss how we could generalize this!
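
One possible generalization, sketched below: store a dotted import path plus keyword arguments per function in the method config and resolve them at run time. The config layout is an assumption, not the actual MEIMethod schema, and torch.clamp is just a stand-in for a real post_update function.

from functools import partial
from importlib import import_module

# Assumed config layout: each entry names a function by import path plus its kwargs.
method_config = {
    "post_update": {"path": "torch.clamp", "kwargs": {"min": -1.0, "max": 1.0}},
}

def resolve(spec):
    # Import the module, look up the function, and bind its configured arguments.
    module_name, function_name = spec["path"].rsplit(".", 1)
    function = getattr(import_module(module_name), function_name)
    return partial(function, **spec["kwargs"])

post_update = resolve(method_config["post_update"])  # post_update(tensor) clips to [-1, 1]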

@eywalker eywalker merged commit e114a1e into sinzlab:master Feb 20, 2020
MaxFBurg added a commit to MaxFBurg/mei that referenced this pull request Nov 23, 2021