NVIDIA License

1. Definitions

“Licensor” means any person or entity that distributes its Work.
“Work” means (a) the original work of authorship made available under this license, which may include software, documentation, or other files, and (b) any additions to or derivative works thereof that are made available under this license.
The terms “reproduce,” “reproduction,” “derivative works,” and “distribution” have the meaning as provided under U.S. copyright law; provided, however, that for the purposes of this license, derivative works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work.
Works are “made available” under this license by including in or with the Work either (a) a copyright notice referencing the applicability of this license to the Work, or (b) a copy of this license.

2. License Grant

2.1 Copyright Grant. Subject to the terms and conditions of this license, each Licensor grants to you a perpetual, worldwide, non-exclusive, royalty-free, copyright license to use, reproduce, prepare derivative works of, publicly display, publicly perform, sublicense and distribute its Work and any resulting derivative works in any form.

3. Limitations

3.1 Redistribution. You may reproduce or distribute the Work only if (a) you do so under this license, (b) you include a complete copy of this license with your distribution, and (c) you retain without modification any copyright, patent, trademark, or attribution notices that are present in the Work.

3.2 Derivative Works. You may specify that additional or different terms apply to the use, reproduction, and distribution of your derivative works of the Work (“Your Terms”) only if (a) Your Terms provide that the use limitation in Section 3.3 applies to your derivative works, and (b) you identify the specific derivative works that are subject to Your Terms. Notwithstanding Your Terms, this license (including the redistribution requirements in Section 3.1) will continue to apply to the Work itself.

3.3 Use Limitation. The Work and any derivative works thereof only may be used or intended for use non-commercially. Notwithstanding the foregoing, NVIDIA Corporation and its affiliates may use the Work and any derivative works commercially. As used herein, “non-commercially” means for research or evaluation purposes only.

3.4 Patent Claims. If you bring or threaten to bring a patent claim against any Licensor (including any claim, cross-claim or counterclaim in a lawsuit) to enforce any patents that you allege are infringed by any Work, then your rights under this license from such Licensor (including the grant in Section 2.1) will terminate immediately.

3.5 Trademarks. This license does not grant any rights to use any Licensor’s or its affiliates’ names, logos, or trademarks, except as necessary to reproduce the notices described in this license.

3.6 Termination. If you violate any term of this license, then your rights under this license (including the grant in Section 2.1) will terminate immediately.

4. Disclaimer of Warranty.

THE WORK IS PROVIDED “AS IS” WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR NON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER THIS LICENSE.

5. Limitation of Liability.

EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE SHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF OR RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK (INCLUDING BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION, LOST PROFITS OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
# Medical AI for Synthetic Imaging (MAISI)

This example demonstrates how to train and validate NVIDIA MAISI, a 3D Latent Diffusion Model (LDM) that generates large CT images with corresponding segmentation masks. It supports variable volume sizes and voxel spacings and allows precise control of organ/tumor size.

## MAISI Model Highlights
- A foundation Variational Auto-Encoder (VAE) model for latent feature compression that works for both CT and MRI with flexible volume sizes and voxel sizes
- A foundation diffusion model that can generate large CT volumes up to 512 × 512 × 768 voxels, with flexible volume sizes and voxel sizes
- A ControlNet to generate image/mask pairs with controllable organ/tumor size, which can improve downstream tasks

## Example Results and Evaluation

## MAISI Model Workflow
The training and inference workflows of MAISI are depicted in the figures below. Training begins with an autoencoder in pixel space that encodes images into latent features. A diffusion model is then trained in the latent space to denoise noisy latent features. During inference, latent features are first generated from random noise through multiple denoising steps with the trained diffusion model; the denoised latent features are then decoded into images with the trained autoencoder. A minimal code sketch of this two-stage workflow follows the figures.

<p align="center">
  <img src="./figures/maisi_train.jpg" alt="MAISI training scheme">
  <br>
  <em>Figure 1: MAISI training scheme</em>
</p>

<p align="center">
  <img src="./figures/maisi_infer.jpg" alt="MAISI inference scheme">
  <br>
  <em>Figure 2: MAISI inference scheme</em>
</p>
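To make the two-stage design concrete, here is a minimal, self-contained PyTorch sketch of the workflow. The tiny `Conv3d` networks and the fixed-step denoising loop are hypothetical stand-ins for the MAISI autoencoder, diffusion UNet, and noise scheduler; they illustrate only the data flow, not the actual models.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the MAISI networks (toy layers, not the real models).
encoder = nn.Conv3d(1, 4, kernel_size=4, stride=4)            # image -> latent (4x spatial compression)
decoder = nn.ConvTranspose3d(4, 1, kernel_size=4, stride=4)   # latent -> image
denoiser = nn.Conv3d(4, 4, kernel_size=3, padding=1)          # predicts the noise added to a latent

# Stage 1 (training): the autoencoder compresses images into latent features.
image = torch.randn(1, 1, 64, 64, 64)                         # toy stand-in for a CT volume
latent = encoder(image)                                       # shape (1, 4, 16, 16, 16)

# Stage 2 (training): corrupt the latent with noise and learn to predict that noise.
noise = torch.randn_like(latent)
loss = nn.functional.mse_loss(denoiser(latent + noise), noise)
loss.backward()

# Inference: start from pure noise, iteratively denoise, then decode to pixel space.
x = torch.randn(1, 4, 16, 16, 16)
with torch.no_grad():
    for _ in range(10):                                       # stand-in for a real noise schedule
        x = x - 0.1 * denoiser(x)
    synthetic = decoder(x)
print(synthetic.shape)                                        # torch.Size([1, 1, 64, 64, 64])
```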
MAISI is based on the following papers:

[**Latent Diffusion:** Rombach, Robin, et al. "High-Resolution Image Synthesis with Latent Diffusion Models." CVPR 2022.](https://openaccess.thecvf.com/content/CVPR2022/papers/Rombach_High-Resolution_Image_Synthesis_With_Latent_Diffusion_Models_CVPR_2022_paper.pdf)

[**ControlNet:** Zhang, Lvmin, et al. "Adding Conditional Control to Text-to-Image Diffusion Models." ICCV 2023.](https://openaccess.thecvf.com/content/ICCV2023/papers/Zhang_Adding_Conditional_Control_to_Text-to-Image_Diffusion_Models_ICCV_2023_paper.pdf)

### 1. Installation
Please refer to the [Installation of MONAI Generative Model](../README.md).

Note: MAISI depends on the [xFormers](https://github.com/facebookresearch/xformers) library.
ARM64 users can build xFormers from [source](https://github.com/facebookresearch/xformers?tab=readme-ov-file#installing-xformers) if the available wheels do not meet their requirements.
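As an optional convenience (not part of the official setup), you can verify the installation from Python before launching any MAISI script:

```python
# Optional sanity check that xFormers is importable in the current environment.
try:
    import xformers
    print(f"xFormers {xformers.__version__} is available")
except ImportError:
    print("xFormers is not installed; see the build instructions linked above")
```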

### 2. Model inference and example outputs
Please refer to [maisi_inference_tutorial.ipynb](maisi_inference_tutorial.ipynb) for a tutorial on MAISI model inference.

### 3. Training example
Training data preparation is described in [./data/README.md](./data/README.md).

#### [3.1 3D Autoencoder Training](./maisi_train_vae_tutorial.ipynb)

Please refer to [maisi_train_vae_tutorial.ipynb](maisi_train_vae_tutorial.ipynb) for a tutorial on MAISI VAE model training.

#### [3.2 3D Latent Diffusion Training](./scripts/diff_model_train.py)

Please refer to [maisi_diff_unet_training_tutorial.ipynb](maisi_diff_unet_training_tutorial.ipynb) for a tutorial on MAISI diffusion model training.

#### [3.3 3D ControlNet Training](./scripts/train_controlnet.py)

We provide a [training config](./configs/config_maisi_controlnet_train.json) for fine-tuning the pretrained ControlNet with a new class (i.e., kidney tumor).
When fine-tuning with other new class names, please update `weighted_loss_label` in the training config and [label_dict.json](./configs/label_dict.json) accordingly; a sketch of this edit follows below. The default `label_dict.json` contains 8 dummy labels that serve as deletable placeholders, and any of them can be reused for fine-tuning. If more than 8 new labels are needed, users can freely define additional numeric label indices below 256; the current ControlNet implementation supports up to 256 labels (0-255).
The preprocessed dataset for ControlNet training and more details about data preparation can be found in the [README](./data/README.md).
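For illustration, here is a hedged sketch of registering a new class. It assumes `label_dict.json` maps class names to integer indices and that `weighted_loss_label` holds a list of indices to emphasize during training; the class name and index below are hypothetical, so check both config files for the exact schema before use.

```python
import json

# Add a hypothetical new class to label_dict.json (any unused index below 256).
with open("./configs/label_dict.json") as f:
    label_dict = json.load(f)
label_dict["new_tumor_type"] = 200  # hypothetical name and index
with open("./configs/label_dict.json", "w") as f:
    json.dump(label_dict, f, indent=4)

# Point weighted_loss_label in the training config at the new index.
with open("./configs/config_maisi_controlnet_train.json") as f:
    train_config = json.load(f)
train_config["weighted_loss_label"] = [200]  # assumed to be a list of label indices
with open("./configs/config_maisi_controlnet_train.json", "w") as f:
    json.dump(train_config, f, indent=4)
```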

#### Training Configuration
The training was performed with the following settings:
- GPU: at least 60 GB of GPU memory for a 512 × 512 × 512 volume
- Actual model input (the size of the 3D image feature in latent space) for the latent diffusion model: 128 × 128 × 128 for a 512 × 512 × 512 volume (see the sketch after this list)
- AMP: True
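The latent input size follows from the VAE's spatial compression; a 4× downsampling factor is implied by the 512 → 128 mapping above, and the sketch below assumes that factor holds for other shapes as well:

```python
# Latent-space input size under an assumed uniform 4x spatial downsampling.
def latent_size(volume_shape, downsample=4):
    return tuple(s // downsample for s in volume_shape)

print(latent_size((512, 512, 512)))  # (128, 128, 128)
print(latent_size((512, 512, 768)))  # (128, 128, 192) for the largest supported volume
```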

#### Execute Training
To train with a single GPU, please run:
```bash
python -m scripts.train_controlnet -c ./configs/config_maisi.json -t ./configs/config_maisi_controlnet_train.json -e ./configs/environment_maisi_controlnet_train.json -g 1
```

The training script also supports multi-GPU training. For instance, with eight GPUs, you can run:
```bash
export NUM_GPUS_PER_NODE=8
torchrun \
    --nproc_per_node=${NUM_GPUS_PER_NODE} \
    --nnodes=1 \
    --master_addr=localhost --master_port=1234 \
    -m scripts.train_controlnet -c ./configs/config_maisi.json -t ./configs/config_maisi_controlnet_train.json -e ./configs/environment_maisi_controlnet_train.json -g ${NUM_GPUS_PER_NODE}
```
Please also check [maisi_train_controlnet_tutorial.ipynb](./maisi_train_controlnet_tutorial.ipynb) for more details about data preparation and training parameters.

### 4. License

The code is released under the Apache 2.0 License.

The model weights are released under the [NSCLv1 License](./LICENSE.weights).

### 5. Questions and Bugs

- For questions relating to the use of MONAI, please use our [Discussions tab](https://github.com/Project-MONAI/MONAI/discussions) on the main repository of MONAI.
- For bugs relating to MONAI functionality, please create an issue on the [main repository](https://github.com/Project-MONAI/MONAI/issues).
- For bugs relating to the running of a tutorial, please create an issue in [this repository](https://github.com/Project-MONAI/Tutorials/issues).
# Medical AI for Synthetic Imaging (MAISI) Data Preparation

Disclaimer: We are not the hosts of the data. Please make sure to read the requirements and usage policies of the data and give credit to the authors of the datasets!

### 1 VAE training Data
For the released foundation autoencoder model weights in MAISI, we used 37,243 CT training volumes and 1,963 CT validation volumes from the chest, abdomen, and head-and-neck regions, together with 17,887 MRI training volumes and 940 MRI validation volumes from the brain, skull-stripped brain, chest, and below-abdomen regions. The training data come from [TCIA Covid 19 Chest CT](https://wiki.cancerimagingarchive.net/display/Public/CT+Images+in+COVID-19#70227107b92475d33ae7421a9b9c426f5bb7d5b3), [TCIA Colon Abdomen CT](https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=3539213), [MSD03 Liver Abdomen CT](http://medicaldecathlon.com/), [LIDC Chest CT](https://www.cancerimagingarchive.net/collection/lidc-idri/), [TCIA Stony Brook Covid Chest CT](https://www.cancerimagingarchive.net/collection/covid-19-ny-sbu/), [NLST Chest CT](https://www.cancerimagingarchive.net/collection/nlst/), [TCIA Upenn GBM Brain MR](https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=70225642), [Aomic Brain MR](https://openneuro.org/datasets/ds003097/versions/1.2.1), [QTIM Brain MR](https://openneuro.org/datasets/ds004169/versions/1.0.7), [TCIA Acrin Chest MR](https://www.cancerimagingarchive.net/collection/acrin-contralateral-breast-mr/), and [TCIA Prostate MR Below-Abdomen MR](https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=68550661#68550661a2c52df5969d435eae49b9669bea21a6).

In total, we included:

| Index | Dataset Name | Number of Training Data | Number of Validation Data |
|-------|------------------------------------------------|-------------------------|---------------------------|
| 1 | Covid 19 Chest CT | 722 | 49 |
| 2 | TCIA Colon Abdomen CT | 1522 | 77 |
| 3 | MSD03 Liver Abdomen CT | 104 | 0 |
| 4 | LIDC Chest CT | 450 | 24 |
| 5 | TCIA Stony Brook Covid Chest CT | 2644 | 139 |
| 6 | NLST Chest CT | 31801 | 1674 |
| 7 | TCIA Upenn GBM Brain MR (skull-stripped) | 2550 | 134 |
| 8 | Aomic Brain MR | 2630 | 138 |
| 9 | QTIM Brain MR | 1275 | 67 |
| 10 | Acrin Chest MR | 6599 | 347 |
| 11 | TCIA Prostate MR Below-Abdomen MR | 928 | 49 |
| 12 | Aomic Brain MR, skull-stripped | 2630 | 138 |
| 13 | QTIM Brain MR, skull-stripped | 1275 | 67 |
| | Total CT | 37243 | 1963 |
| | Total MRI | 17887 | 940 |
### 2 Diffusion model training Data

The training dataset for the diffusion model used in MAISI comprises 10,277 CT volumes from 24 distinct datasets, encompassing various body regions and disease patterns.

The table below provides a summary of the number of volumes for each dataset.

|Index| Dataset name|Number of volumes|
|:-----|:-----|:-----|
| 1 | AbdomenCT-1K | 789 |
| 2 | AeroPath | 15 |
| 3 | AMOS22 | 240 |
| 4 | autoPET23 | 200 |
| 5 | Bone-Lesion | 223 |
| 6 | BTCV | 48 |
| 7 | COVID-19 | 524 |
| 8 | CRLM-CT | 158 |
| 9 | CT-ORG | 94 |
| 10 | CTPelvic1K-CLINIC | 94 |
| 11 | LIDC | 422 |
| 12 | MSD Task03 | 88 |
| 13 | MSD Task06 | 50 |
| 14 | MSD Task07 | 224 |
| 15 | MSD Task08 | 235 |
| 16 | MSD Task09 | 33 |
| 17 | MSD Task10 | 87 |
| 18 | Multi-organ-Abdominal-CT | 65 |
| 19 | NLST | 3109 |
| 20 | Pancreas-CT | 51 |
| 21 | StonyBrook-CT | 1258 |
| 22 | TCIA_Colon | 1437 |
| 23 | TotalSegmentatorV2 | 654 |
| 24 | VerSe | 179 |
||
### 3 ControlNet model training Data | ||
|
||
#### 3.1 Example preprocessed dataset | ||
|
||
We provide the preprocessed subset of [C4KC-KiTS](https://www.cancerimagingarchive.net/collection/c4kc-kits/) dataset used in the finetuning config `environment_maisi_controlnet_train.json`. The dataset and corresponding JSON data list can be downloaded from [this link](https://drive.google.com/drive/folders/1iMStdYxcl26dEXgJEXOjkWvx-I2fYZ2u?usp=sharing) and should be saved in `maisi/dataset/` folder. | ||
|
||
The structure of example folder in the preprocessed dataset is: | ||
|
||
``` | ||
|-*arterial*.nii.gz # original image | ||
|-*arterial_emb*.nii.gz # encoded image embedding | ||
KiTS-000* --|-mask*.nii.gz # original labels | ||
|-mask_pseudo_label*.nii.gz # pseudo labels | ||
|-mask_combined_label*.nii.gz # combined mask of original and pseudo labels | ||
``` | ||
|
||
An example combined mask of original and pseudo labels is shown below: | ||
![example_combined_mask](../figures/example_combined_mask.png) | ||
|
||
Please note that the label of Kidney Tumor is mapped to index `129` in this preprocessed dataset. The encoded image embedding is generated by provided `Autoencoder` in `./models/autoencoder_epoch273.pt` during preprocessing to save memory usage for training. The pseudo labels are generated by [VISTA 3D](https://github.com/Project-MONAI/VISTA). In addition, the dimension of each volume and corresponding pseudo label is resampled to the closest multiple of 128 (e.g., 128, 256, 384, 512, ...). | ||
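As a small illustration of that resampling target (one interpretation of "closest multiple of 128", with a floor of 128; the actual preprocessing script may differ):

```python
# Round each volume dimension to the nearest multiple of 128, never below 128.
def closest_multiple_of_128(dim: int) -> int:
    return max(128, round(dim / 128) * 128)

print([closest_multiple_of_128(d) for d in (100, 190, 300, 512)])  # [128, 128, 256, 512]
```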

The training workflow requires a JSON file that specifies the image embedding and segmentation pairs. An example file is located at `maisi/dataset/C4KC-KiTS_subset.json`.

The JSON file has the following structure:
```python
{
    "training": [
        {
            "image": "*/*arterial_emb*.nii.gz",       # relative path to the image embedding file
            "label": "*/mask_combined_label*.nii.gz", # relative path to the combined label file
            "dim": [512, 512, 512],                   # the dimension of the image
            "spacing": [1.0, 1.0, 1.0],               # the spacing of the image
            "top_region_index": [0, 1, 0, 0],         # the top region index of the image
            "bottom_region_index": [0, 0, 0, 1],      # the bottom region index of the image
            "fold": 0                                 # fold index for cross validation; fold 0 is used for training
        },

        ...
    ]
}
```
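For example, a minimal snippet for reading this data list and selecting the training fold might look like the following. It assumes the schema above, the download location from Section 3.1, and that the on-disk file is valid JSON (without the inline comments shown above for annotation):

```python
import json

# Load the data list and keep only fold-0 entries, which are used for training.
with open("maisi/dataset/C4KC-KiTS_subset.json") as f:
    datalist = json.load(f)

train_items = [item for item in datalist["training"] if item["fold"] == 0]
print(f"{len(train_items)} training pairs")
print(train_items[0]["image"], train_items[0]["dim"])
```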

#### 3.2 ControlNet full training datasets
The ControlNet training dataset used in MAISI contains 6,330 CT volumes (5,058 for training and 1,272 for validation) across 20 datasets, covering different body regions and diseases.

The table below summarizes the number of volumes for each dataset.

|Index| Dataset name|Number of volumes|
|:-----|:-----|:-----|
| 1 | AbdomenCT-1K | 789 |
| 2 | AeroPath | 15 |
| 3 | AMOS22 | 240 |
| 4 | Bone-Lesion | 237 |
| 5 | BTCV | 48 |
| 6 | CT-ORG | 94 |
| 7 | CTPelvic1K-CLINIC | 94 |
| 8 | LIDC | 422 |
| 9 | MSD Task03 | 105 |
| 10 | MSD Task06 | 50 |
| 11 | MSD Task07 | 225 |
| 12 | MSD Task08 | 235 |
| 13 | MSD Task09 | 33 |
| 14 | MSD Task10 | 101 |
| 15 | Multi-organ-Abdominal-CT | 64 |
| 16 | Pancreas-CT | 51 |
| 17 | StonyBrook-CT | 1258 |
| 18 | TCIA_Colon | 1436 |
| 19 | TotalSegmentatorV2 | 654 |
| 20 | VerSe | 179 |

### 4. Questions and bugs

- For questions relating to the use of MONAI, please use our [Discussions tab](https://github.com/Project-MONAI/MONAI/discussions) on the main repository of MONAI.
- For bugs relating to MONAI functionality, please create an issue on the [main repository](https://github.com/Project-MONAI/MONAI/issues).
- For bugs relating to the running of a tutorial, please create an issue in [this repository](https://github.com/Project-MONAI/Tutorials/issues).

### Reference
[1] [Rombach, Robin, et al. "High-Resolution Image Synthesis with Latent Diffusion Models." CVPR 2022.](https://openaccess.thecvf.com/content/CVPR2022/papers/Rombach_High-Resolution_Image_Synthesis_With_Latent_Diffusion_Models_CVPR_2022_paper.pdf)