Adding VQVAE/VQGAN Tutorials (Project-MONAI#1817)
Addresses part of Project-MONAI#1769.

### Description
This adds a few more tutorials from the GenerativeModels repo.

### Checks
<!--- Put an `x` in all the boxes that apply, and remove the not
applicable items -->
- [ ] Avoid including large-size files in the PR.
- [ ] Clean up long text outputs from code cells in the notebook.
- [ ] For security purposes, please check the contents and remove any
sensitive info such as user names and private key.
- [ ] Ensure (1) hyperlinks and markdown anchors are working (2) use
relative paths for tutorial repo files (3) put figure and graphs in the
`./figure` folder
- [ ] Notebook runs automatically `./runner.sh -t <path to .ipynb file>`

---------

Signed-off-by: Eric Kerfoot <eric.kerfoot@kcl.ac.uk>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
ericspod and pre-commit-ci[bot] authored Sep 6, 2024
1 parent b5766e5 commit 9491b4b
Showing 5 changed files with 3,056 additions and 0 deletions.
715 changes: 715 additions & 0 deletions generation/2d_vqgan/2d_vqgan_tutorial.ipynb

642 changes: 642 additions & 0 deletions generation/2d_vqvae/2d_vqvae_tutorial.ipynb

1,043 changes: 1,043 additions & 0 deletions generation/2d_vqvae_transformer/2d_vqvae_transformer_tutorial.ipynb

635 changes: 635 additions & 0 deletions generation/3d_vqvae/3d_vqvae_tutorial.ipynb

21 changes: 21 additions & 0 deletions generation/README.md
@@ -39,3 +39,24 @@ Example shows the use cases of how to use MONAI for 2D segmentation of images us

## [Evaluate Realism and Diversity of the generated images](./realism_diversity_metrics/realism_diversity_metrics.ipynb)
Example shows how to use MONAI to evaluate the performance of a generative model by computing metrics such as Frechet Inception Distance (FID) and Maximum Mean Discrepancy (MMD) for assessing realism, as well as MS-SSIM and SSIM for evaluating image diversity.
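
As a rough illustration (not part of the tutorial itself), these metrics might be computed along the following lines, assuming `FIDMetric`, `MMDMetric` and `MultiScaleSSIMMetric` are available from `monai.metrics`; the random tensors stand in for extracted features and real/generated images:

```python
import torch
from monai.metrics import FIDMetric, MMDMetric, MultiScaleSSIMMetric

# Stand-ins for real data: FID is computed on features from a pretrained
# backbone, MMD and MS-SSIM directly on image tensors of shape (B, C, H, W).
real_features, fake_features = torch.randn(64, 48), torch.randn(64, 48)
real_images, fake_images = torch.rand(8, 1, 224, 224), torch.rand(8, 1, 224, 224)

fid = FIDMetric()(fake_features, real_features)  # realism: lower is better
mmd = MMDMetric()(real_images, fake_images)      # realism: lower is better
ms_ssim = MultiScaleSSIMMetric(spatial_dims=2)(fake_images, real_images)  # similarity per image pair
print(fid.item(), mmd.item(), ms_ssim.mean().item())
```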

## [Training a 2D VQ-VAE + Autoregressive Transformers](./2d_vqvae_transformer/2d_vqvae_transformer_tutorial.ipynb):
Example shows how to train a Vector Quantized Variational Autoencoder (VQ-VAE) together with an autoregressive Transformer on the MedNIST dataset.
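
A minimal sketch of the two-stage idea behind this tutorial, assuming the `VQVAE` and `DecoderOnlyTransformer` classes (and the `index_quantize` helper) exposed by MONAI's generative components; constructor arguments, the BOS-token handling and shapes are illustrative and may differ from the notebook:

```python
import torch
import torch.nn.functional as F
from monai.networks.nets import VQVAE, DecoderOnlyTransformer

num_embeddings = 256  # codebook size (illustrative)
vqvae = VQVAE(spatial_dims=2, in_channels=1, out_channels=1, num_embeddings=num_embeddings)

# Stage 1: train the VQ-VAE with a reconstruction + quantization loss.
images = torch.rand(4, 1, 64, 64)                      # MedNIST-like batch
reconstruction, quantization_loss = vqvae(images)
vqvae_loss = F.l1_loss(reconstruction, images) + quantization_loss

# Stage 2: turn each image into a sequence of codebook indices and train a
# decoder-only transformer to predict the next index (inputs shifted by a BOS token).
with torch.no_grad():
    indices = vqvae.index_quantize(images)             # assumed helper: (B, H', W') codebook indices
sequence = indices.flatten(start_dim=1)                # (B, seq_len)
bos = num_embeddings                                   # extra token id marking "begin of sequence"
inputs = torch.cat([torch.full((sequence.shape[0], 1), bos), sequence[:, :-1]], dim=1)

transformer = DecoderOnlyTransformer(
    num_tokens=num_embeddings + 1, max_seq_len=sequence.shape[1],
    attn_layers_dim=128, attn_layers_depth=4, attn_layers_heads=4,
)
logits = transformer(inputs)                           # (B, seq_len, num_tokens)
transformer_loss = F.cross_entropy(logits.transpose(1, 2), sequence)
```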

## Training VQ-VAEs and VQ-GANs: [2D VAE](./2d_vqvae/2d_vqvae_tutorial.ipynb), [3D VAE](./3d_vqvae/3d_vqvae_tutorial.ipynb) and [2D GAN](./2d_vqgan/2d_vqgan_tutorial.ipynb)
Examples show how to train a Vector Quantized Variational Autoencoder on [2D](./2d_vqvae/2d_vqvae_tutorial.ipynb) and [3D](./3d_vqvae/3d_vqvae_tutorial.ipynb) data, and how to use the PatchDiscriminator class to train a [VQ-GAN](./2d_vqgan/2d_vqgan_tutorial.ipynb) and improve the quality of the generated images.

## [Training a 2D Denoising Diffusion Probabilistic Model](./2d_ddpm/2d_ddpm_tutorial.ipynb):
Example shows how to easily train a DDPM on medical data (MedNIST).
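
A single training step for such a model might look roughly as follows, assuming `DiffusionModelUNet`, `DDPMScheduler` and `DiffusionInferer` as provided by MONAI (in the standalone GenerativeModels package the same classes live under `generative.*`); default network settings are used for brevity:

```python
import torch
import torch.nn.functional as F
from monai.inferers import DiffusionInferer
from monai.networks.nets import DiffusionModelUNet
from monai.networks.schedulers import DDPMScheduler

model = DiffusionModelUNet(spatial_dims=2, in_channels=1, out_channels=1)
scheduler = DDPMScheduler(num_train_timesteps=1000)
inferer = DiffusionInferer(scheduler)

images = torch.rand(4, 1, 64, 64)  # MedNIST-like batch
noise = torch.randn_like(images)
timesteps = torch.randint(0, scheduler.num_train_timesteps, (images.shape[0],)).long()

# The inferer noises the images at the sampled timesteps and asks the UNet to predict that noise.
noise_pred = inferer(inputs=images, diffusion_model=model, noise=noise, timesteps=timesteps)
loss = F.mse_loss(noise_pred, noise)

# After training, new images are generated by iteratively denoising pure noise.
samples = inferer.sample(input_noise=torch.randn(1, 1, 64, 64), diffusion_model=model, scheduler=scheduler)
```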

## [Comparing different noise schedulers](./2d_ddpm/2d_ddpm_compare_schedulers.ipynb):
Example compares the performance of different noise schedulers. This shows how to sample a diffusion model using the DDPM, DDIM, and PNDM schedulers and how different numbers of timesteps affect the quality of the samples.
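
In sketch form, swapping schedulers amounts to something like the following, assuming an already trained `DiffusionModelUNet` and the `DDPMScheduler`/`DDIMScheduler`/`PNDMScheduler` classes; the step budget of 50 and the scheduler settings are illustrative:

```python
import torch
from monai.inferers import DiffusionInferer
from monai.networks.nets import DiffusionModelUNet
from monai.networks.schedulers import DDIMScheduler, DDPMScheduler, PNDMScheduler

model = DiffusionModelUNet(spatial_dims=2, in_channels=1, out_channels=1)  # assumed already trained
noise = torch.randn(1, 1, 64, 64)

# The same trained network can be sampled with different schedulers; fewer inference
# steps trade sample quality for speed, which is what the notebook compares.
for scheduler in (DDPMScheduler(), DDIMScheduler(), PNDMScheduler(skip_prk_steps=True)):
    scheduler.set_timesteps(num_inference_steps=50)
    samples = DiffusionInferer(scheduler).sample(input_noise=noise, diffusion_model=model, scheduler=scheduler)
```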

## [Training a 2D Denoising Diffusion Probabilistic Model with a different parameterization](./2d_ddpm/2d_ddpm_tutorial_v_prediction.ipynb):
Example shows how to train a DDPM using the v-prediction parameterization, which improves the stability and convergence of the model. MONAI supports different parameterizations for the diffusion model (epsilon, sample, and v-prediction).
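
In sketch form, switching parameterizations mostly means changing the scheduler's `prediction_type` and the regression target; here the velocity target is computed from the scheduler's cumulative alphas (an assumption about the exposed attributes), so the details may differ from the notebook:

```python
import torch
import torch.nn.functional as F
from monai.inferers import DiffusionInferer
from monai.networks.nets import DiffusionModelUNet
from monai.networks.schedulers import DDPMScheduler

model = DiffusionModelUNet(spatial_dims=2, in_channels=1, out_channels=1)
scheduler = DDPMScheduler(num_train_timesteps=1000, prediction_type="v_prediction")
inferer = DiffusionInferer(scheduler)

images = torch.rand(4, 1, 64, 64)
noise = torch.randn_like(images)
timesteps = torch.randint(0, scheduler.num_train_timesteps, (images.shape[0],)).long()

# With v-prediction the network regresses the "velocity"
#   v = sqrt(alpha_bar_t) * noise - sqrt(1 - alpha_bar_t) * x_0
# instead of the noise itself; alphas_cumprod is assumed to be exposed by the scheduler.
alpha_bar = scheduler.alphas_cumprod[timesteps].view(-1, 1, 1, 1)
target = alpha_bar.sqrt() * noise - (1 - alpha_bar).sqrt() * images

prediction = inferer(inputs=images, diffusion_model=model, noise=noise, timesteps=timesteps)
loss = F.mse_loss(prediction, target)
```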

## [Training a 2D DDPM using PyTorch Ignite](./2d_ddpm/2d_ddpm_tutorial_ignite.ipynb):
Example shows how to train a DDPM on medical data using PyTorch Ignite. This shows how to use the DiffusionPrepareBatch class to prepare the model inputs and MONAI's SupervisedTrainer and SupervisedEvaluator to train DDPMs.
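
Roughly, the Ignite-based loop looks like the sketch below, assuming `DiffusionPrepareBatch` is importable from `monai.engines` (its exact module path may differ by MONAI version) and using a tiny in-memory dataset for illustration:

```python
import torch
from monai.data import DataLoader, Dataset
from monai.engines import DiffusionPrepareBatch, SupervisedTrainer
from monai.inferers import DiffusionInferer
from monai.networks.nets import DiffusionModelUNet
from monai.networks.schedulers import DDPMScheduler

device = torch.device("cpu")
model = DiffusionModelUNet(spatial_dims=2, in_channels=1, out_channels=1).to(device)
scheduler = DDPMScheduler(num_train_timesteps=1000)
train_loader = DataLoader(Dataset([{"image": torch.rand(1, 64, 64)} for _ in range(8)]), batch_size=4)

# DiffusionPrepareBatch draws the random noise and timesteps for every batch, so the
# generic SupervisedTrainer loop can train the UNet to predict the added noise with MSE.
trainer = SupervisedTrainer(
    device=device,
    max_epochs=1,
    train_data_loader=train_loader,
    network=model,
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-4),
    loss_function=torch.nn.MSELoss(),
    inferer=DiffusionInferer(scheduler),
    prepare_batch=DiffusionPrepareBatch(num_train_timesteps=1000),
)
trainer.run()
```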

## [Using a 2D DDPM to inpaint images](./2d_ddpm/2d_ddpm_inpainting.ipynb):
Example shows how to use a DDPM to inpaint 2D images from the MedNIST dataset using the RePaint method.
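
Conceptually, RePaint alternates reverse-diffusion steps on the full image with re-noising of the known pixels, combining the two through a mask. A hedged sketch, assuming a trained `DiffusionModelUNet`, a `DDPMScheduler` whose `step` returns the previous sample first, and its `add_noise` helper (the mask and shapes are illustrative):

```python
import torch
from monai.networks.nets import DiffusionModelUNet
from monai.networks.schedulers import DDPMScheduler

model = DiffusionModelUNet(spatial_dims=2, in_channels=1, out_channels=1)  # assumed already trained
scheduler = DDPMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(num_inference_steps=1000)

image = torch.rand(1, 1, 64, 64)   # image whose masked region should be inpainted
mask = torch.ones_like(image)
mask[..., 16:32, 16:32] = 0        # 0 marks the unknown region to be filled in

sample = torch.randn_like(image)
for t in scheduler.timesteps:
    t_batch = torch.as_tensor(t).unsqueeze(0)
    with torch.no_grad():
        model_output = model(sample, timesteps=t_batch)
    # Reverse step gives the estimate for the unknown region.
    unknown, _ = scheduler.step(model_output, t, sample)
    # Forward-diffuse the known pixels to (approximately) the same noise level and recombine;
    # the full RePaint method additionally resamples each step several times for better blending.
    known = scheduler.add_noise(original_samples=image, noise=torch.randn_like(image), timesteps=t_batch)
    sample = mask * known + (1 - mask) * unknown
```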
