Update google link to use shared drive (Project-MONAI#1819)

### Checks
- [ ] Avoid including large-size files in the PR.
- [ ] Clean up long text outputs from code cells in the notebook.
- [ ] For security purposes, please check the contents and remove any sensitive info such as usernames and private keys.
- [ ] Ensure that (1) hyperlinks and markdown anchors are working, (2) relative paths are used for tutorial repo files, and (3) figures and graphs are placed in the `./figure` folder.
- [ ] Notebook runs automatically via `./runner.sh -t <path to .ipynb file>`.

---------

Signed-off-by: YunLiu <55491388+KumoLiu@users.noreply.github.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
KumoLiu and pre-commit-ci[bot] authored Sep 7, 2024
1 parent f85cef6 commit 3c891ec
Showing 25 changed files with 51 additions and 45 deletions.
2 changes: 1 addition & 1 deletion 3d_classification/densenet_training_array.ipynb
@@ -200,7 +200,7 @@
],
"source": [
"if not os.path.isfile(images[0]):\n",
" resource = \"https://drive.google.com/file/d/1f5odq9smadgeJmDeyEy_UOjEtE_pkKc0/view?usp=sharing\"\n",
" resource = \"https://developer.download.nvidia.com/assets/Clara/monai/tutorials/IXI-T1.tar\"\n",
" md5 = \"34901a0593b41dd19c1a1f746eac2d58\"\n",
"\n",
" dataset_dir = os.path.join(root_dir, \"ixi\")\n",
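For reference, the tutorials typically feed such a `resource`/`md5` pair to MONAI's download helper. A minimal sketch (the temp-dir handling is an assumption; the URL and checksum come from the hunk above):

```python
import os
import tempfile

from monai.apps import download_and_extract

resource = "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/IXI-T1.tar"
md5 = "34901a0593b41dd19c1a1f746eac2d58"

root_dir = tempfile.mkdtemp()  # assumption: the notebook derives root_dir elsewhere
dataset_dir = os.path.join(root_dir, "ixi")

# download the tar once, verify the md5 checksum, then unpack into dataset_dir
download_and_extract(resource, f"{dataset_dir}.tar", dataset_dir, hash_val=md5)
```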
2 changes: 1 addition & 1 deletion 3d_regression/densenet_training_array.ipynb
@@ -205,7 +205,7 @@
"outputs": [],
"source": [
"if not os.path.isfile(images[0]):\n",
" resource = \"https://drive.google.com/file/d/1f5odq9smadgeJmDeyEy_UOjEtE_pkKc0/view?usp=sharing\"\n",
" resource = \"https://developer.download.nvidia.com/assets/Clara/monai/tutorials/IXI-T1.tar\"\n",
" md5 = \"34901a0593b41dd19c1a1f746eac2d58\"\n",
"\n",
" dataset_dir = os.path.join(root_dir, \"ixi\")\n",
4 changes: 2 additions & 2 deletions 3d_segmentation/swin_unetr_brats21_segmentation_3d.ipynb
@@ -45,7 +45,7 @@
"\n",
"https://www.synapse.org/#!Synapse:syn27046444/wiki/616992\n",
"\n",
"The JSON file containing training and validation sets (internal split) needs to be downloaded from this [link](https://drive.google.com/file/d/1i-BXYe-wZ8R9Vp3GXoajGyqaJ65Jybg1/view?usp=sharing) and placed in the same folder as the dataset. As discussed in the following, this tutorial uses fold 1 for training a Swin UNETR model on the BraTS 21 challenge.\n",
"The JSON file containing training and validation sets (internal split) needs to be downloaded from this [link](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/brats21_folds.json) and placed in the same folder as the dataset. As discussed in the following, this tutorial uses fold 1 for training a Swin UNETR model on the BraTS 21 challenge.\n",
"\n",
"### Tumor Characteristics\n",
"\n",
@@ -114,7 +114,7 @@
" \"TrainingData/BraTS2021_01146/BraTS2021_01146_flair.nii.gz\"\n",
" \n",
"\n",
"- Download the json file from this [link](https://drive.google.com/file/d/1i-BXYe-wZ8R9Vp3GXoajGyqaJ65Jybg1/view?usp=sharing) and placed in the same folder as the dataset.\n"
"- Download the json file from this [link](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/brats21_folds.json) and placed in the same folder as the dataset.\n"
]
},
{
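The tutorial trains on fold 1 of the downloaded `brats21_folds.json`. Below is a hedged sketch of the fold split; the `training`/`fold` schema is an assumption about the JSON layout, not something shown in this diff.

```python
import json


def datafold_read(datalist_path: str, fold: int = 1):
    """Split the datalist into train/val on an assumed per-entry 'fold' field."""
    with open(datalist_path) as f:
        datalist = json.load(f)
    train = [d for d in datalist["training"] if d.get("fold") != fold]
    val = [d for d in datalist["training"] if d.get("fold") == fold]
    return train, val


# fold 1 held out for validation, as the tutorial text describes
train_files, val_files = datafold_read("brats21_folds.json", fold=1)
```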
2 changes: 1 addition & 1 deletion 3d_segmentation/swin_unetr_btcv_segmentation_3d.ipynb
@@ -331,7 +331,7 @@
"outputs": [],
"source": [
"# uncomment this command to download the JSON file directly\n",
"# wget -O data/dataset_0.json 'https://drive.google.com/uc?export=download&id=1qcGh41p-rI3H_sQ0JwOAhNiQSXriQqGi'"
"# wget -O data/dataset_0.json 'https://developer.download.nvidia.com/assets/Clara/monai/tutorials/swin_unetr_btcv_dataset_0.json'"
]
},
{
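For environments without `wget`, the same JSON can be fetched with MONAI's `download_url`; a sketch mirroring the commented command above:

```python
from monai.apps import download_url

# same file the commented wget line fetches, via MONAI's downloader
download_url(
    url="https://developer.download.nvidia.com/assets/Clara/monai/tutorials/swin_unetr_btcv_dataset_0.json",
    filepath="data/dataset_0.json",
)
```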
2 changes: 1 addition & 1 deletion 3d_segmentation/vista3d/vista3d_spleen_finetune.ipynb
@@ -191,7 +191,7 @@
}
],
"source": [
"resource = \"https://drive.google.com/file/d/1Sbe6GjlgH-GIcXolZzUiwgqR4DBYNLQ3/view?usp=drive_link\"\n",
"resource = \"https://developer.download.nvidia.com/assets/Clara/monai/tutorials/model_zoo/model_vista3d.pt\"\n",
"if not os.path.exists(os.path.join(root_dir, \"model.pt\")):\n",
" download_url(url=resource, filepath=os.path.join(root_dir, \"model.pt\"))\n",
"if os.path.exists(os.path.join(root_dir, \"model.pt\")):\n",
@@ -71,7 +71,7 @@
"## Load useful data\n",
"\n",
"As described in `readme.md`, we manually labeled 1126 frames in order to build the detection model.\n",
"Please download the manually labeled bounding boxes from [google drive](https://drive.google.com/file/d/1iO4bXTGdhRLIoxIKS6P_nNAgI_1Fp_Vg/view?usp=sharing), the uncompressed folder `labels` is saved into `label_14_tools_yolo_640_blur/`."
"Please download the manually labeled bounding boxes from [google drive](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/1126_frame_labels.zip), the uncompressed folder `labels` is saved into `label_14_tools_yolo_640_blur/`."
]
},
{
2 changes: 1 addition & 1 deletion deployment/ray/mednist_classifier_ray.ipynb
@@ -122,7 +122,7 @@
"metadata": {},
"outputs": [],
"source": [
"resource = \"https://drive.google.com/uc?id=1zKRi5FrwEES_J-AUkM7iBJwc__jy6ct6\"\n",
"resource = \"https://developer.download.nvidia.com/assets/Clara/monai/tutorials/deployment/classifier.zip\"\n",
"dst = os.path.join(\"..\", \"bentoml\", \"classifier.zip\")\n",
"if not os.path.exists(dst):\n",
" download_url(resource, dst)"
2 changes: 1 addition & 1 deletion detection/README.md
@@ -46,7 +46,7 @@ Then run the following command and go directly to Sec. 3.2.
python3 luna16_prepare_env_files.py
```

-Alternatively, you can download the original data and resample them by yourself with the following steps. Users can either download 1) mhd/raw data from [LUNA16](https://luna16.grand-challenge.org/Home/) or its [copy](https://drive.google.com/drive/folders/1-enN4eNEnKmjltevKg3W2V-Aj0nriQWE?usp=share_link), or 2) DICOM data from [LIDC-IDRI](https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=1966254) with [NBIA Data Retriever](https://wiki.cancerimagingarchive.net/display/NBIA/Downloading+TCIA+Images).
+Alternatively, you can download the original data and resample them by yourself with the following steps. Users can either download 1) mhd/raw data from [LUNA16](https://luna16.grand-challenge.org/Home/), or 2) DICOM data from [LIDC-IDRI](https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=1966254) with [NBIA Data Retriever](https://wiki.cancerimagingarchive.net/display/NBIA/Downloading+TCIA+Images).

The raw CT images in LUNA16 have various voxel sizes. The first step is to resample them to the same voxel size, which is defined in the value of "spacing" in [./config/config_train_luna16_16g.json](./config/config_train_luna16_16g.json).

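A hedged sketch of that resampling step using MONAI transforms; the spacing tuple and input file name below are placeholders, and the authoritative spacing value lives under "spacing" in the training config cited above.

```python
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged, SaveImaged, Spacingd

spacing = (0.703125, 0.703125, 1.25)  # assumption: read the real value from the config
resample = Compose(
    [
        LoadImaged(keys="image"),  # reads mhd/raw via ITK
        EnsureChannelFirstd(keys="image"),
        Spacingd(keys="image", pixdim=spacing, mode="bilinear"),
        SaveImaged(keys="image", output_dir="./resampled", output_postfix="resampled"),
    ]
)
resample({"image": "some_luna16_scan.mhd"})  # hypothetical input file
```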
2 changes: 1 addition & 1 deletion federated_learning/breast_density_challenge/data/README.md
@@ -1,6 +1,6 @@
## Example breast density data

-Download example data from https://drive.google.com/file/d/1Fd9GLUIzbZrl4FrzI3Huzul__C8wwzyx/view?usp=sharing.
+Download example data from https://developer.download.nvidia.com/assets/Clara/monai/tutorials/fl/preprocessed.zip.
Extract here.

## Data source
@@ -118,7 +118,7 @@ $ fx envoy start --shard-name env_two --disable-tls --envoy-config-path envoy_co
```
[13:48:42] INFO 🧿 Starting the Envoy. envoy.py:53
Downloading...
-From: https://drive.google.com/uc?id=1QsnnkvZyJPcbRoV_ArW8SnE1OTuoVbKE
+From: https://developer.download.nvidia.com/assets/Clara/monai/tutorials/MedNIST.tar.gz
To: /tmp/tmpd60wcnn8/MedNIST.tar.gz
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 61.8M/61.8M [00:04<00:00, 13.8MB/s]
2022-07-22 13:48:48,735 - INFO - Downloaded: MedNIST.tar.gz
2 changes: 1 addition & 1 deletion generation/maisi/data/README.md
@@ -62,7 +62,7 @@ The table below provides a summary of the number of volumes for each dataset.

#### 3.1 Example preprocessed dataset

-We provide the preprocessed subset of [C4KC-KiTS](https://www.cancerimagingarchive.net/collection/c4kc-kits/) dataset used in the finetuning config `environment_maisi_controlnet_train.json`. The dataset and corresponding JSON data list can be downloaded from [this link](https://drive.google.com/drive/folders/1iMStdYxcl26dEXgJEXOjkWvx-I2fYZ2u?usp=sharing) and should be saved in `maisi/dataset/` folder.
+We provide the preprocessed subset of [C4KC-KiTS](https://www.cancerimagingarchive.net/collection/c4kc-kits/) dataset used in the finetuning config `environment_maisi_controlnet_train.json`. The [dataset](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/model_zoo/model_maisi_C4KC-KiTS_subset.zip) and [corresponding JSON data](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/model_zoo/model_maisi_C4KC-KiTS_subset.json) list can be downloaded and should be saved in `maisi/dataset/` folder.

The structure of example folder in the preprocessed dataset is:

22 changes: 14 additions & 8 deletions generation/maisi/maisi_inference_tutorial.ipynb
@@ -157,35 +157,41 @@
"files = [\n",
" {\n",
" \"path\": \"models/autoencoder_epoch273.pt\",\n",
" \"url\": \"https://drive.google.com/file/d/1Ojw25lFO8QbHkxazdK4CgZTyp3GFNZGz/view?usp=sharing\",\n",
" \"url\": \"https://developer.download.nvidia.com/assets/Clara/monai/tutorials\"\n",
" \"/model_zoo/model_maisi_autoencoder_epoch273_alternative.pt\",\n",
" },\n",
" {\n",
" \"path\": \"models/input_unet3d_data-all_steps1000size512ddpm_random_current_inputx_v1.pt\",\n",
" \"url\": \"https://drive.google.com/file/d/1lklNv4MTdI_9bwFRMd98QQ7JLerR5gC_/view?usp=drive_link\",\n",
" \"url\": \"https://developer.download.nvidia.com/assets/Clara/monai/tutorials/model_zoo\"\n",
" \"/model_maisi_input_unet3d_data-all_steps1000size512ddpm_random_current_inputx_v1_alternative.pt\",\n",
" },\n",
" {\n",
" \"path\": \"models/controlnet-20datasets-e20wl100fold0bc_noi_dia_fsize_current.pt\",\n",
" \"url\": \"https://drive.google.com/file/d/1mLYeqeZ819_WpZPlAInhcWuCIHgn3QNT/view?usp=drive_link\",\n",
" \"url\": \"https://developer.download.nvidia.com/assets/Clara/monai/tutorials/model_zoo\"\n",
" \"/model_maisi_controlnet-20datasets-e20wl100fold0bc_noi_dia_fsize_current_alternative.pt\",\n",
" },\n",
" {\n",
" \"path\": \"models/mask_generation_autoencoder.pt\",\n",
" \"url\": \"https://drive.google.com/file/d/19JnX-C6QAg4RfghTwpPnj4KEWhtawpCy/view?usp=drive_link\",\n",
" \"url\": \"https://developer.download.nvidia.com/assets/Clara/monai\" \"/tutorials/mask_generation_autoencoder.pt\",\n",
" },\n",
" {\n",
" \"path\": \"models/mask_generation_diffusion_unet.pt\",\n",
" \"url\": \"https://drive.google.com/file/d/1yOQvlhXFGY1ZYavADM3N34vgg5AEitda/view?usp=drive_link\",\n",
" \"url\": \"https://developer.download.nvidia.com/assets/Clara/monai\"\n",
" \"/tutorials/model_zoo/model_maisi_mask_generation_diffusion_unet_v2.pt\",\n",
" },\n",
" {\n",
" \"path\": \"configs/candidate_masks_flexible_size_and_spacing_3000.json\",\n",
" \"url\": \"https://drive.google.com/file/d/1yMkH-lrAsn2YUGoTuVKNMpicziUmU-1J/view?usp=sharing\",\n",
" \"url\": \"https://developer.download.nvidia.com/assets/Clara/monai\"\n",
" \"/tutorials/candidate_masks_flexible_size_and_spacing_3000.json\",\n",
" },\n",
" {\n",
" \"path\": \"configs/all_anatomy_size_condtions.json\",\n",
" \"url\": \"https://drive.google.com/file/d/1AJyt1DSoUd2x2AOQOgM7IxeSyo4MXNX0/view?usp=sharing\",\n",
" \"url\": \"https://developer.download.nvidia.com/assets/Clara/monai/tutorials/all_anatomy_size_condtions.json\",\n",
" },\n",
" {\n",
" \"path\": \"datasets/all_masks_flexible_size_and_spacing_3000.zip\",\n",
" \"url\": \"https://drive.google.com/file/d/16MKsDKkHvDyF2lEir4dzlxwex_GHStUf/view?usp=sharing\",\n",
" \"url\": \"https://developer.download.nvidia.com/assets/Clara/monai\"\n",
" \"/tutorials/model_zoo/model_maisi_all_masks_flexible_size_and_spacing_3000.zip\",\n",
" },\n",
"]\n",
"\n",
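The `files` list above pairs a destination `path` with a download `url`. A likely consumption pattern (an assumption; the actual loop sits outside this diff):

```python
import os

from monai.apps import download_url

for file in files:  # `files` as defined in the cell above
    if not os.path.exists(file["path"]):
        download_url(url=file["url"], filepath=file["path"])
```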
16 changes: 8 additions & 8 deletions generation/maisi/scripts/inference.py
@@ -76,35 +76,35 @@ def main():
files = [
{
"path": "models/autoencoder_epoch273.pt",
"url": "https://drive.google.com/file/d/1Ojw25lFO8QbHkxazdK4CgZTyp3GFNZGz/view?usp=sharing",
"url": "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/model_zoo/model_maisi_autoencoder_epoch273_alternative.pt",
},
{
"path": "models/input_unet3d_data-all_steps1000size512ddpm_random_current_inputx_v1.pt",
"url": "https://drive.google.com/file/d/1lklNv4MTdI_9bwFRMd98QQ7JLerR5gC_/view?usp=drive_link",
"url": "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/model_zoo/model_maisi_input_unet3d_data-all_steps1000size512ddpm_random_current_inputx_v1_alternative.pt",
},
{
"path": "models/controlnet-20datasets-e20wl100fold0bc_noi_dia_fsize_current.pt",
"url": "https://drive.google.com/file/d/1mLYeqeZ819_WpZPlAInhcWuCIHgn3QNT/view?usp=drive_link",
"url": "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/model_zoo/model_maisi_controlnet-20datasets-e20wl100fold0bc_noi_dia_fsize_current_alternative.pt",
},
{
"path": "models/mask_generation_autoencoder.pt",
"url": "https://drive.google.com/file/d/19JnX-C6QAg4RfghTwpPnj4KEWhtawpCy/view?usp=drive_link",
"url": "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/mask_generation_autoencoder.pt",
},
{
"path": "models/mask_generation_diffusion_unet.pt",
"url": "https://drive.google.com/file/d/1yOQvlhXFGY1ZYavADM3N34vgg5AEitda/view?usp=drive_link",
"url": "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/model_zoo/model_maisi_mask_generation_diffusion_unet_v2.pt",
},
{
"path": "configs/candidate_masks_flexible_size_and_spacing_3000.json",
"url": "https://drive.google.com/file/d/1yMkH-lrAsn2YUGoTuVKNMpicziUmU-1J/view?usp=sharing",
"url": "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/candidate_masks_flexible_size_and_spacing_3000.json",
},
{
"path": "configs/all_anatomy_size_condtions.json",
"url": "https://drive.google.com/file/d/1AJyt1DSoUd2x2AOQOgM7IxeSyo4MXNX0/view?usp=sharing",
"url": "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/all_anatomy_size_condtions.json",
},
{
"path": "datasets/all_masks_flexible_size_and_spacing_3000.zip",
"url": "https://drive.google.com/file/d/16MKsDKkHvDyF2lEir4dzlxwex_GHStUf/view?usp=sharing",
"url": "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/model_zoo/model_maisi_all_masks_flexible_size_and_spacing_3000.zip",
},
]

2 changes: 1 addition & 1 deletion modules/benchmark_global_mutual_information.ipynb
@@ -149,7 +149,7 @@
" os.makedirs(directory, exist_ok=True)\n",
"root_dir = tempfile.mkdtemp() if directory is None else directory\n",
"print(f\"root dir is: {root_dir}\")\n",
"file_url = \"https://drive.google.com/uc?id=17tsDLvG_GZm7a4fCVMCv-KyDx0hqq1ji\"\n",
"file_url = \"https://developer.download.nvidia.com/assets/Clara/monai/tutorials/Prostate_T2W_AX_1.nii\"\n",
"file_path = f\"{root_dir}/Prostate_T2W_AX_1.nii\"\n",
"download_url(file_url, file_path)"
]
2 changes: 1 addition & 1 deletion modules/engines/gan_training.py
@@ -14,7 +14,7 @@
Sample script using MONAI to train a GAN to synthesize images from a latent code.
## Get the dataset
-MedNIST.tar.gz link: https://drive.google.com/uc?id=1QsnnkvZyJPcbRoV_ArW8SnE1OTuoVbKE
+MedNIST.tar.gz link: https://developer.download.nvidia.com/assets/Clara/monai/tutorials/MedNIST.tar.gz
Extract tarball and set input_dir variable. GAN script trains using hand CT scan jpg images.
Dataset information available in MedNIST Tutorial
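A hedged companion snippet for the setup steps in the docstring above; the "Hand" class folder name follows the usual MedNIST layout and should be treated as an assumption.

```python
import tarfile

from monai.apps import download_url

archive = "MedNIST.tar.gz"
download_url("https://developer.download.nvidia.com/assets/Clara/monai/tutorials/MedNIST.tar.gz", archive)
with tarfile.open(archive) as tar:
    tar.extractall(".")  # creates MedNIST/<class folders>
input_dir = "MedNIST/Hand"  # hand CT scan jpg images used by the GAN script
```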
2 changes: 1 addition & 1 deletion modules/public_datasets.ipynb
@@ -595,7 +595,7 @@
"outputs": [],
"source": [
"class IXIDataset(Randomizable, CacheDataset):\n",
" resource = \"https://drive.google.com/file/d/1f5odq9smadgeJmDeyEy_UOjEtE_pkKc0/view?usp=sharing\"\n",
" resource = \"https://developer.download.nvidia.com/assets/Clara/monai/tutorials/IXI-T1.tar\"\n",
" md5 = \"34901a0593b41dd19c1a1f746eac2d58\"\n",
"\n",
" def __init__(\n",
2 changes: 1 addition & 1 deletion modules/resample_benchmark.ipynb
@@ -174,7 +174,7 @@
"text": [
"\n",
"Downloading...\n",
"From: https://drive.google.com/uc?id=17tsDLvG_GZm7a4fCVMCv-KyDx0hqq1ji\n",
"From: https://developer.download.nvidia.com/assets/Clara/monai/tutorials/Prostate_T2W_AX_1.nii\n",
"To: /tmp/tmp2euy74rf/mri.nii\n",
"100%|██████████| 12.1M/12.1M [00:00<00:00, 210MB/s]"
]
@@ -94,7 +94,7 @@
"\n",
" - If you are going to use full dataset of TotalSegmentator, please refer to the dataset link, download the data, create and preprocess the images following [this page](https://zenodo.org/record/6802614).\n",
" \n",
" - In this tutorial, we prepared a sample subset, resampled and ready to use. The subset is only for demonstration. Download [here](https://drive.google.com/file/d/1DtDmERVMjks1HooUhggOKAuDm0YIEunG/view?usp=sharing).\n",
" - In this tutorial, we prepared a sample subset, resampled and ready to use. The subset is only for demonstration. Download [here](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/totalSegmentator_mergedLabel_samples.zip).\n",
" \n",
" To use the bundle, users need to download the data and merge all annotated labels into one NIFTI file. Each file contains 0-104 values, each value represents one anatomy class.\n",
" \n",
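A hedged sketch of the label-merging step described above: combine per-anatomy binary masks into a single NIfTI whose voxel values are class indices 1-104 (0 = background). File names here are hypothetical.

```python
import glob

import nibabel as nib
import numpy as np

mask_paths = sorted(glob.glob("segmentations/*.nii.gz"))  # one binary mask per anatomy
first = nib.load(mask_paths[0])
merged = np.zeros(first.shape, dtype=np.uint8)
for class_idx, path in enumerate(mask_paths, start=1):
    merged[nib.load(path).get_fdata() > 0] = class_idx  # later classes win on overlap
nib.save(nib.Nifti1Image(merged, first.affine), "merged_label.nii.gz")
```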
@@ -16,6 +16,6 @@ completed, the dataset can be readily used for the tutorial.
1) Create a new folder named 'monai_data' for downloading the raw data and preprocessing.
2) Download the chest X-ray images in PNG format from this [link](https://openi.nlm.nih.gov/imgs/collections/NLMCXR_png.tgz). Copy the downloaded file (NLMCXR_png.tgz) to 'monai_data' directory and extract it to 'monai_data/dataset_orig/NLMCXR_png/'.
3) Download the reports in XML format from this [link](https://openi.nlm.nih.gov/imgs/collections/NLMCXR_reports.tgz). Copy the downloaded file (NLMCXR_reports.tgz) to 'monai_data' directory and extract it to 'monai_data/dataset_orig/NLMCXR_reports/'.
-4) Download the splits of train, validation and test datasets from this [link](https://drive.google.com/u/1/uc?id=1jvT0jVl9mgtWy4cS7LYbF43bQE4mrXAY&export=download). Copy the downloaded file (TransChex_openi.zip)
+4) Download the splits of train, validation and test datasets from this [link](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/TransChex_openi.zip). Copy the downloaded file (TransChex_openi.zip)
to 'monai_data' directory and extract it here.
5) Run 'preprocess_openi.py' to process the images and reports.
2 changes: 1 addition & 1 deletion pathology/multiple_instance_learning/README.md
@@ -49,7 +49,7 @@ python ./panda_mil_train_evaluate_pytorch_gpu.py -h

Train in multi-gpu mode with AMP using all available gpus,
assuming the training images are in the `/PandaChallenge2020/train_images` folder,
-it will use the pre-defined 80/20 data split in [datalist_panda_0.json](https://drive.google.com/drive/u/0/folders/1CAHXDZqiIn5QUfg5A7XsK1BncRu6Ftbh)
+it will use the pre-defined 80/20 data split in [datalist_panda_0.json](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/datalist_panda_0.json)

```bash
python -u panda_mil_train_evaluate_pytorch_gpu.py \
@@ -530,7 +530,7 @@ def parse_args():

if args.dataset_json is None:
# download default json datalist
-resource = "https://drive.google.com/uc?id=1L6PtKBlHHyUgTE4rVhRuOLTQKgD4tBRK"
+resource = "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/datalist_panda_0.json"
dst = "./datalist_panda_0.json"
if not os.path.exists(dst):
gdown.download(resource, dst, quiet=False)