
Is there a build model_creation example? #445

Open
davidackerman opened this issue Dec 19, 2024 · 15 comments

Comments

@davidackerman

Previously, there was a model_creation example notebook; it is also referenced in the BioImage.IO developer guide as being here. Does it still exist?

If not, is there another such example of how to create a model from torch?

@FynnBe

FynnBe commented Dec 20, 2024

Hi, thanks for reporting the broken link!
Sorry for the inconvenience.

I updated the example links in the bioimageio.core docs here and created bioimage-io/bioimage.io#439 to start updating our central docs.

The model_creation notebook was revised and republished as model_usage.ipynb (it is now listed under the examples).

That notebook starts off with loading an existing resource from the zoo to get you started faster.
I once presented how to create a model completely from scratch: notebook slides. However, this notebook does not (yet) have sufficient comments to really serve as a standalone example (maybe you'll still find it a useful reference). I plan to make a proper example out of it eventually (#447).

@mattaq31

Hi, sorry to butt in here, but I'm also attempting to create a Bioimage.io model from a pytorch model and I honestly had no idea where to start before I saw the presentation you've linked.

So just to confirm if I'm reading your presentation correctly - do I need 1) the model source code to load in the model architecture (via ArchitectureFromFileDescr) and 2) the model weights in Torchscript format? I also saw that you saved your weights to .pt format but it seems like it was just for demonstration purposes.

Thanks in advance!

@FynnBe

FynnBe commented Dec 23, 2024

Hi @mattaq31 , thank you for your question!
This is great feedback helping to improve our documentation and examples!
Under weights there are a few weight formats to choose from: https://bioimage-io.github.io/core-bioimage-io-python/bioimageio/spec/model/v0_5.html#WeightsDescr

We encourage contributors to provide as many as possible; a single weights format is the minimum. When possible, it's great to convert pytorch state dict weights (which require executing the source code for the architecture, as I'm sure you're familiar with) to torchscript (which requires neither the original source code nor a specific Python environment, making it far easier to deploy).
To continue training or adapt a model in other ways, the pytorch state dict weights are easier to work with, so they should still be included even if torchscript weights are available.
We accept scripted and traced torchscript models.
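
For illustration, a minimal conversion sketch (UNet, the file names, and the example input shape below are placeholders for your own model):

  import torch

  from my_model import UNet  # placeholder: your architecture's source code

  model = UNet()
  model.load_state_dict(torch.load("unet_state_dict.pt", map_location="cpu"))
  model.eval()

  # tracing records the graph from one example run; adjust the shape to your model
  example = torch.rand(1, 1, 256, 256)
  traced = torch.jit.trace(model, example)
  traced.save("unet_torchscript.pt")

  # alternatively, scripting compiles the module directly (no example input needed)
  scripted = torch.jit.script(model)
  scripted.save("unet_scripted.pt")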

@mattaq31

mattaq31 commented Dec 23, 2024

Hi @FynnBe thanks for getting back to me so quickly during the holiday period! Also thanks for clarifying the different options available when choosing which weights to upload.

Eventually, I managed to assemble a bioimage.io model for my torchscript weights and proceeded to run the validation script, which completed with no errors. For more info: I am preparing a U-Net segmentation model, trained in PyTorch, that accepts single-channel images and outputs dual-channel images of the same size.

However, I have a couple more questions (apologies in advance for this large dump of text!):

  • The validation script is directly testing the model with my provided test input/output arrays, which is fine. However, my ultimate goal is to release my segmentation model for use with DeepImageJ. Are the torchscript weights I provided enough? Is there some way to test whether the model will work within DeepImageJ?
  • I have a certain set of pre/post processing requirements:
    • Input images can be either 8 or 16-bit. The 8-bit ones should be normalized by dividing by 255, while the 16-bit ones should be scaled to 0-1 using percentile normalization. I can embed this normalization into my torchscript module, but I won't be able to recognize whether an image is 8- or 16-bit if the input is a tensor. Does a BioImage.IO preprocessing module exist which can take care of these requirements?
    • All input images to the model should have only one channel. RGB inputs need to be converted to grayscale. Does BioImage.IO have a module for this? Alternatively, I can embed the channel squishing within the torchscript module.
    • I also need to pad my images to be divisible by 32. From what I can see, it seems this pre/post processing is not available in BioImage.IO. Is this correct? If so, would you recommend I embed the padding within the torchscript module itself? (A sketch of such embedded padding is shown after this list.)
  • In general, are models in the BioImage.IO zoo setup to work 'out-of-the-box' when an input image is provided? The above requirements are non-trivial and when I previously implemented my torchscript model in QuPath, I preferred to keep the model simple and deal with all the pre/post processing myself within QuPath's java framework. Would you recommend I do the same with DeepImageJ?
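
A minimal sketch of embedding such padding in a torchscript-compatible wrapper (the Pad32Wrapper class and my_unet below are hypothetical placeholders, not part of any bioimageio API):

  import torch
  import torch.nn.functional as F

  class Pad32Wrapper(torch.nn.Module):
      # Pads height/width up to the next multiple of 32, runs the wrapped
      # model, then crops the output back to the original spatial size.
      def __init__(self, model: torch.nn.Module):
          super().__init__()
          self.model = model

      def forward(self, x: torch.Tensor) -> torch.Tensor:
          h = x.shape[-2]
          w = x.shape[-1]
          pad_h = (32 - h % 32) % 32
          pad_w = (32 - w % 32) % 32
          x = F.pad(x, [0, pad_w, 0, pad_h], mode="reflect")
          y = self.model(x)
          return y[..., :h, :w]

  scripted = torch.jit.script(Pad32Wrapper(my_unet))  # my_unet: your trained model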

In general, the documentation available for preparing a new BioImage.IO model from scratch seems to be very terse, and I had to guess many of the arguments to ModelDescr based on your presentation example and the brief tutorial here. For example, a question I had was: 'which preprocessing operations are available and what exactly do they do?', and all I could find was a brief mention here with no details.

Thanks again for your help and apologies once again for the large wall of text but I wasn't sure where to reach out otherwise!

@mattaq31

Hi again @FynnBe, a quick update: I've managed to validate that the model works in DeepImageJ (although that also has its own share of bugs and issues). I'm about to upload the model to BioImage.IO, but I am unable to log in to the uploader form system (https://bioimageio-uploader.netlify.app/). When I try going ahead and uploading anyway, I get various errors such as:

TypeError: Cannot convert undefined or null to object

Would you be able to let me know if there are any workarounds for this issue?

Thanks!

@alanocallaghan

Input images can be either 8 or 16-bit. The 8-bit ones should be normalized by dividing by 255, while the 16-bit ones should be scaled to 0-1 using percentile normalization. I can embed this normalization into my torchscript module, but I won't be able to recognize whether an image is 8- or 16-bit if the input is a tensor. Does a BioImage.IO preprocessing module exist which can take care of these requirements?

You can specify preprocessing, e.g.: https://github.com/instanseg/instanseg/blob/ebe5ef4608ef026b7e0f4a39f8a1e68667582058/instanseg/utils/create_bioimageio_model.py#L273C1-L273C156,
although I don't know if you'll be able to handle the preprocessing differently for 8- and 16-bit images.

@FynnBe

FynnBe commented Jan 7, 2025

Re: the bioimage.io login, I see you created bioimage-io/bioimage.io#440.
I forwarded it internally to draw more attention and get this fixed ASAP.

Input images can be either 8 or 16-bit. The 8-bit ones should be normalized by dividing by 255, while the 16-bit ones should be scaled to 0-1 using percentile normalization. I can embed this normalization into my torchscript module, but I won't be able to recognize whether an image is 8- or 16-bit if the input is a tensor. Does a BioImage.IO preprocessing module exist which can take care of these requirements?

No, but I'd suggest using percentile normalization for all inputs (with additional clipping if you cannot have values outside [0, 1]).
Someone else's 8-bit images might not use the full range, etc. Percentile normalization should work for your 8-bit images as well, right?
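
For reference, percentile normalization with clipping amounts to something like this numpy sketch (the 5.0/99.8 percentiles are just example values, matching the snippet later in this thread):

  import numpy as np

  def percentile_normalize(img: np.ndarray, p_low: float = 5.0, p_high: float = 99.8) -> np.ndarray:
      # map the [p_low, p_high] percentile range to [0, 1], clip everything outside
      lo, hi = np.percentile(img, (p_low, p_high))
      scaled = (img.astype(np.float32) - lo) / max(hi - lo, 1e-8)
      return np.clip(scaled, 0.0, 1.0)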

In general, are models in the BioImage.IO zoo setup to work 'out-of-the-box' when an input image is provided? The above requirements are non-trivial and when I previously implemented my torchscript model in QuPath, I preferred to keep the model simple and deal with all the pre/post processing myself within QuPath's java framework. Would you recommend I do the same with DeepImageJ?

One of our goals is interoperability. The more custom code you need around your model to make it useful, the fewer tools (e.g. only QuPath or only DeepImageJ) can make sense of it.
We are working on mechanisms to support such custom workflows in general, but it is always worth evaluating whether the standard way that works 'out-of-the-box' is sufficient, because whenever it is, many tools can deploy the model, increasing its impact.

In general, the documentation available for preparing a new BioImage.IO model from scratch seems to be very terse, [...]

Thank you for taking a look and giving your valuable feedback! I'll keep working on the libraries themselves and their documentation.
Have you seen https://bioimage-io.github.io/core-bioimage-io-python/bioimageio/core.html, in particular https://bioimage-io.github.io/core-bioimage-io-python/bioimageio/spec.html#ModelDescr ?
In those developer docs I try to give all the details needed to work with bioimageio.spec and bioimageio.core in code.
There you also find details about the available preprocessing for example: https://bioimage-io.github.io/core-bioimage-io-python/bioimageio/spec/model/v0_5.html#InputTensorDescr.preprocessing .

@mattaq31

mattaq31 commented Jan 7, 2025

Great, thanks for highlighting the login issue with the team!

Re. the pre-processing, I'm adhering to the spec used for our model preprint experiments for now, but yes, percentile normalization of all image types works too. Future iterations of our model should be able to simplify the preprocessing/postprocessing further.

I agree with the principle you've outlined for BioImage.IO models in general, but in this case, without the facility to channel-squish, pad and unpad images, I will still have to surround this particular model with custom code for pre- and post-processing. The model I'm working with here is a first-generation system, and in this particular case I'm going to leave it as-is to ensure uniformity across our releases.

Re. documenting the library - I appreciate that the documentation is available (I did check out those links when I was assembling my own model), but the way it's written was difficult for me to interpret as someone who isn't familiar with the syntax presented. For example, to create a preprocessing module, your presentation example is set up as follows:

  preprocessing = [
      ScaleRangeDescr(
          kwargs=ScaleRangeKwargs(
              axes=(AxisId('y'), AxisId('x')),
              max_percentile=99.8,
              min_percentile=5.0,
          )
      )
  ]

This means that I would have to realize that A) a list is needed and B) the preprocessing module has three separate classes that need to be imported (ScaleRangeDescr, ScaleRangeKwargs, AxisId), whose documentation is split across different parts of the page with no examples available. Perhaps some form of 'how to read the docs' guide would simplify understanding things, but I would have much preferred a pytorch-style system where most functions/classes have a mathematical description as well as examples, all contained on the same page (e.g. https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html#torch.nn.Conv2d).
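
(For reference, the imports for the snippet above would be something like the following, assuming the v0.5 module layout:)

  from bioimageio.spec.model.v0_5 import AxisId, ScaleRangeDescr, ScaleRangeKwargs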

Others might be better than me at reading the docs but as a first-time user I was pretty confused :)

Thanks again for your help here!

@FynnBe

FynnBe commented Jan 8, 2025

... pad and unpad images

If you specify a halo for the output tensor (and specify for your spatial/temporal input axes that they are concatenable) then bioimageio.core can pad, unpad and stitch for you.
Call bioimageio.core.predict(..., blocksize_parameter=10) with blocksize_parameter > 0 and make sure your spatial/temporal input axes are parameterized (https://bioimage-io.github.io/core-bioimage-io-python/bioimageio/spec/model/v0_5.html#ParameterizedSize)
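
A rough sketch of such a call (the model path and tensor id are placeholders; exact input handling may differ between bioimageio.core versions):

  import numpy as np
  import bioimageio.core

  model = bioimageio.core.load_description("my_model/rdf.yaml")  # placeholder path
  image = np.random.rand(1, 1, 512, 512).astype("float32")       # stand-in input

  prediction = bioimageio.core.predict(
      model=model,
      inputs={"input0": image},  # placeholder tensor id
      blocksize_parameter=10,    # > 0 enables blockwise prediction with stitching
  )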

Re. documenting the library...

Thank you for sharing! Inspired by your feedback I decided to add some examples to the docs (#681). I hope that will help get people started coding their own descriptions more easily.
The presentation indeed needs context to really be helpful on its own. I will turn it into a standalone example.

@mattaq31

mattaq31 commented Jan 8, 2025

Looks like the v0.5 version of OutputTensorDescr doesn't take a halo though, right? Link here.

Also, would the blocksize_parameter be taken care of by deepimagej automatically, or is this only something that can be specified in Python's bioimageio module?

@alanocallaghan

Halo is now part of the axis specification rather than the tensor specification, e.g. TimeOutputAxisWithHalo.
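
For example, a spatial output axis with a halo might look roughly like this in the v0.5 spec (tensor and axis ids are placeholders):

  from bioimageio.spec.model.v0_5 import (
      AxisId,
      SizeReference,
      SpaceOutputAxisWithHalo,
      TensorId,
  )

  out_x = SpaceOutputAxisWithHalo(
      id=AxisId("x"),
      size=SizeReference(tensor_id=TensorId("input0"), axis_id=AxisId("x")),
      halo=16,  # pixels cropped from each side of the output along this axis
  )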

@FynnBe

FynnBe commented Jan 8, 2025

Also, would the blocksize_parameter be taken care of by deepimagej automatically, or is this only something that can be specified in Python's bioimageio module?

Unfortunately, blocksize_parameter is only a parameter of the bioimageio.core Python package's predict function and (to my knowledge) not available in deepimagej. They may, however, use the halo information that is part of any model description to provide tiling as well. Maybe @carlosuc3m could chime in on this. Otherwise I'd suggest opening a deepimagej issue or an image.sc forum post about that.

@mattaq31

Thanks both for the details on the halo syntax. Given that it's not currently accessible to deepimagej, I'll probably circumvent this issue by building the padding/unpadding into the torchscript module itself.

I think this could still be a nice thing to include for deepimagej in the future though!

@mattaq31

Actually, it turns out that deepimagej is indeed somehow taking care of padding and unpadding by itself! If I use the below for my inputs:

  SpaceInputAxis(
      id=AxisId('x'),
      size=ParameterizedSize(min=32, step=32),
      scale=1,
      concatenable=False,
  )

then deepimagej automatically takes care of padding and unpadding in Fiji. That solves that problem then!

@FynnBe

FynnBe commented Jan 10, 2025

Awesome, just one more small note...

concatenable=False

If your model supports tiling (along that axis), please set concatenable=True to explicitly indicate that this input can be processed in blocks (along that axis) (docs). This makes it easier to (programmatically) aid model users in choosing correctly shaped inputs. Thanks!
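
I.e. the input axis from above would become:

  SpaceInputAxis(
      id=AxisId('x'),
      size=ParameterizedSize(min=32, step=32),
      scale=1,
      concatenable=True,  # explicitly allows blockwise processing along x
  )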
