Is there a build model_creation example? #445
Hi, thanks for reporting the broken link! I updated the example links in the bioimageio.core docs here and created bioimage-io/bioimage.io#439 to start updating our central docs. The model_creation notebook was revised and republished as model_usage.ipynb (as now listed under the examples). That notebook starts off with loading an existing resource from the zoo to get you started faster.
Hi, sorry to butt in here, but I'm also attempting to create a bioimage.io model from a PyTorch model, and I honestly had no idea where to start before I saw the presentation you've linked. So just to confirm that I'm reading your presentation correctly: do I need 1) the model source code to load in the model architecture (via …)? Thanks in advance!
Hi @mattaq31, thank you for your question! We encourage contributors to provide as many weights formats as possible; however, a single weights format is the minimum. When possible, it's great to convert PyTorch state dict weights (which require executing the source code for the architecture, as I'm sure you're familiar with) to TorchScript (which does not require the original source code and specific Python environment, making it far easier to deploy).
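For what it's worth, the state-dict-to-TorchScript conversion described above can be sketched roughly as follows. This is a minimal illustration with a toy stand-in architecture; the class name, weights handling, and file name are placeholders, not anything from bioimageio.core — substitute your own trained network and `torch.load` call:

```python
import torch
import torch.nn as nn

# Toy stand-in for the real architecture; replace with your own U-Net class.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        # 1 input channel, 2 output channels, same spatial size (padding=1)
        self.conv = nn.Conv2d(1, 2, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyNet()
# In practice you would load your trained state dict here, e.g.:
# model.load_state_dict(torch.load("weights.pt", map_location="cpu"))
model.eval()

# Trace with a representative input; torch.jit.script is an alternative
# for architectures with data-dependent control flow.
example = torch.zeros(1, 1, 64, 64)
scripted = torch.jit.trace(model, example)

# The saved file can be loaded without the original Python source code.
scripted.save("model_torchscript.pt")
```

The resulting `.pt` file is what you would reference as the TorchScript weights entry in the model description.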
Hi @FynnBe thanks for getting back to me so quickly during the holiday period! Also thanks for clarifying the different options available when choosing which weights to upload. Eventually, I managed to assemble a bioimage.io model for my torchscript weights and proceeded to run the validation script which ran with no errors. For more info, I am preparing a U-Net segmentation model which was trained within PyTorch that accepts single channel images and outputs dual channel images of the same size. However, I have a couple more questions (apologies in advance for this large dump of text!):
In general, the documentation available for preparing a new bioimage.io model from scratch seems to be very terse; I had to guess many of the arguments to the … Thanks again for your help, and apologies once again for the large wall of text, but I wasn't sure where else to reach out!
Hi again @FynnBe, to quickly update, I've managed to validate the model works in DeepImageJ (although that also has its own share of bugs and issues). I'm about to upload the model to BioImage.IO but I am unable to login to the uploader form system (https://bioimageio-uploader.netlify.app/). When I try going ahead and uploading anyway, I'm getting various errors such as:
Would you be able to let me know if there are any workarounds for this issue? Thanks!
You can specify preprocessing, e.g.: https://github.com/instanseg/instanseg/blob/ebe5ef4608ef026b7e0f4a39f8a1e68667582058/instanseg/utils/create_bioimageio_model.py#L273C1-L273C156
RE bioimage.io login, I see you created bioimage-io/bioimage.io#440
No, but I'd suggest using percentile normalization for all inputs (with additional clipping if you cannot have values outside [0, 1]).
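The percentile normalization with clipping suggested here can be sketched in plain NumPy. This is an illustrative helper, not the bioimageio.core implementation; the 5.0 and 99.8 percentile defaults are just common choices:

```python
import numpy as np

def percentile_normalize(img, p_low=5.0, p_high=99.8, clip=True):
    """Scale intensities so the [p_low, p_high] percentiles map to [0, 1]."""
    lo = np.percentile(img, p_low)
    hi = np.percentile(img, p_high)
    out = (img - lo) / (hi - lo + 1e-8)  # epsilon guards against hi == lo
    if clip:
        # Keep values strictly inside [0, 1] for models that require it.
        out = np.clip(out, 0.0, 1.0)
    return out
```

In a model description, the equivalent built-in preprocessing would be expressed with a ScaleRange step plus clipping rather than custom code, so that consuming tools can reproduce it.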
One of our goals is interoperability. The more custom code you need around your model to make it useful, the fewer tools (e.g. only QuPath or only DeepImageJ) can make sense of it.
Thank you for taking a look and giving your valuable feedback! I'll keep working on the libraries themselves and their documentation.
Great, thanks for highlighting the login issue with the team!

Re. the pre-processing: I'm adhering to the spec used for our model preprint experiments for now, but yes, percentile normalization of all image types works too. Future iterations of our model should be able to simplify the preprocessing/postprocessing further. I agree with the principle you've outlined for bioimage.io models in general, but in this case, without the facility to channel-squish, pad and unpad images, I will still have to surround this particular model with custom code for pre- and post-processing. The model I'm working with here is a first-generation system, and in this particular case I'm going to leave it as-is to ensure uniformity across our releases.

Re. documenting the library: I appreciate that the documentation is available (I did check out those links when I was assembling my own model), but the way it's written was difficult for me to interpret as someone who isn't that familiar with the syntax presented. For example, to create a preprocessing module, your presentation example is set as follows:

```python
preprocessing = [ScaleRangeDescr(
    kwargs=ScaleRangeKwargs(
        axes=(AxisId('y'), AxisId('x')),
        max_percentile=99.8,
        min_percentile=5.0,
    )
)]
```

This means that I would have to realize that A) a list is needed and B) the preprocessing module has three separate classes that need to be imported (ScaleRangeDescr, ScaleRangeKwargs, AxisId), for which the documentation on each is split across different parts of the page and no examples were available. Perhaps some form of 'how to read the docs' guide would help simplify things, but I would have much preferred a PyTorch-style system where most functions/classes have a mathematical description as well as examples, all contained on the same page (e.g. https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html#torch.nn.Conv2d). Others might be better than me at reading the docs, but as a first-time user I was pretty confused :) Thanks again for your help here!
If you specify a
Thank you for sharing! Inspired by your feedback I decided to add some examples to the docs (#681). I hope that will help get people started coding their own descriptions more easily. |
Looks like the v0.5 version of OutputTensorDescr doesn't take a halo though, right? Link here. Also would the …
Halo is now part of the axis specification rather than the tensor specification, e.g. …
Unfortunately …
Thanks both for the details on the halo syntax. Given that it's not currently accessible to DeepImageJ, I'll probably circumvent this issue by building the padding/unpadding into the TorchScript module itself. I think this could still be a nice thing to include for DeepImageJ in the future, though!
Actually, it turns out that DeepImageJ is indeed somehow taking care of padding and unpadding by itself! If I use the below for my inputs:

```python
SpaceInputAxis(
    id=AxisId('x'),
    size=ParameterizedSize(min=32, step=32),
    scale=1,
    concatenable=False)
```

then DeepImageJ automatically takes care of padding and unpadding in Fiji. That solves that problem then!
Awesome, just one more small note...
If your model supports tiling (along that axis), please set …
Previously, there was a model_creation example notebook. It is also referenced in the BioImage Model Zoo developer guide as being here.
If not, is there another such example of how to create a model from PyTorch?