From f8239f9964a50a2efe146ec107800d6499de5027 Mon Sep 17 00:00:00 2001
From: YunLiu <55491388+KumoLiu@users.noreply.github.com>
Date: Tue, 7 May 2024 11:55:01 +0800
Subject: [PATCH] Typo fix under auto3dseg folder (#1706)

### Checks
- [ ] Avoid including large-size files in the PR.
- [ ] Clean up long text outputs from code cells in the notebook.
- [ ] For security purposes, please check the contents and remove any sensitive info such as user names and private key.
- [ ] Ensure (1) hyperlinks and markdown anchors are working (2) use relative paths for tutorial repo files (3) put figure and graphs in the `./figure` folder
- [ ] Notebook runs automatically `./runner.sh -t `

---------

Signed-off-by: KumoLiu
Signed-off-by: YunLiu <55491388+KumoLiu@users.noreply.github.com>
---
 auto3dseg/README.md                     |  2 +-
 auto3dseg/docs/algorithm_generation.md  |  2 +-
 auto3dseg/docs/hpo.md                   |  2 +-
 auto3dseg/notebooks/data_analyzer.ipynb | 21 ++++-----------------
 auto3dseg/notebooks/ensemble_byoc.ipynb |  2 +-
 5 files changed, 8 insertions(+), 21 deletions(-)

diff --git a/auto3dseg/README.md b/auto3dseg/README.md
index 84699f4535..e13996b07d 100644
--- a/auto3dseg/README.md
+++ b/auto3dseg/README.md
@@ -54,7 +54,7 @@ We provide [a two-minute example](notebooks/auto3dseg_hello_world.ipynb) for use
 
 ## A "Real-World" Example
 
-To further demonstrate the capabilities of **Auto3DSeg**, [here](tasks/instance22) is the detailed performance of the algorithm in **Auto3DSeg**, which won 2nd place in the MICCAI 2022 challenge** [INSTANCE22: The 2022 Intracranial Hemorrhage Segmentation Challenge on Non-Contrast Head CT (NCCT)](https://instance.grand-challenge.org/)**
+To further demonstrate the capabilities of **Auto3DSeg**, [here](./tasks/instance22/README.md) is the detailed performance of the algorithm in **Auto3DSeg**, which won 2nd place in the MICCAI 2022 challenge **[INSTANCE22: The 2022 Intracranial Hemorrhage Segmentation Challenge on Non-Contrast Head CT (NCCT)](https://instance.grand-challenge.org/)**
 
 ## Reference Python APIs for Auto3DSeg
 
diff --git a/auto3dseg/docs/algorithm_generation.md b/auto3dseg/docs/algorithm_generation.md
index 5797b0537f..dfe54d01d7 100644
--- a/auto3dseg/docs/algorithm_generation.md
+++ b/auto3dseg/docs/algorithm_generation.md
@@ -65,7 +65,7 @@ The code block would generate multiple algorithm bundles as follows. The folder
 
 ### Algorithm Templates
 
-The Python class **BundleGen** utilizes [the default algorithm templates](https://github.com/Project-MONAI/research-contributions/tree/main/auto3dseg) implicitly. The default algorithms are based on four established works (DiNTS, SegResNet, SegResNet2D, and SwinUNETR). They support both 3D CT and MR image segmentation. In the template, some items are empty or null, and they will be filled together with dataset information. The part of the configuration file "hyper_parameters.yaml" is shown below. In the configuration, the items (like "bundle_root", "data_file_base_dir", and "patch_size") will be filled up automatically with any user interaction.
+The Python class **BundleGen** utilizes [the default algorithm templates](https://github.com/Project-MONAI/research-contributions/tree/main/auto3dseg) implicitly. The default algorithms are based on four established works (DiNTS, SegResNet, SegResNet2D, and SwinUNETR). They support both 3D CT and MR image segmentation. In the template, some items are empty or null, and they will be filled together with dataset information. The part of the configuration file "hyper_parameters.yaml" is shown below. In the configuration, the items (like "bundle_root", "data_file_base_dir", and "patch_size") will be filled up automatically without any user interaction.
 
 ```
 bundle_root: null
diff --git a/auto3dseg/docs/hpo.md b/auto3dseg/docs/hpo.md
index b75cfed9d3..6be9fe4a49 100644
--- a/auto3dseg/docs/hpo.md
+++ b/auto3dseg/docs/hpo.md
@@ -12,7 +12,7 @@ The HPOGen class has a `run_algo()` function, which will be used by the third-pa
 
 ### Usage
 The tutorial on how to use NNIGen is [here](../notebooks/hpo_nni.ipynb) and the tutorial for OptunaGen is [here](../notebooks/hpo_optuna.ipynb). The list of HPO algorithms in NNI and Optuna can be found on [the NNI GitHub page](https://github.com/microsoft/nni) and [Optuna documentation](https://optuna.readthedocs.io/en/stable/reference/samplers/index.html).
-For demonstration purposes, both of our tutorials use a grid search HPO algorithm to optimize the learning rate in training. Users can be easily modified to random search and bayesian based methods for more hyperparameters.
+For demonstration purposes, both of our tutorials use a grid search HPO algorithm to optimize the learning rate in training. Users can easily switch to random search and bayesian based methods for more hyperparameters.
 
 ### Override Specific Parameters in the Algorithms before HPO
 Users can change **Auto3DSeg** algorithms in HPO by providing a set of overriding parameters.
diff --git a/auto3dseg/notebooks/data_analyzer.ipynb b/auto3dseg/notebooks/data_analyzer.ipynb
index 0b9de6a050..c0a904ce89 100644
--- a/auto3dseg/notebooks/data_analyzer.ipynb
+++ b/auto3dseg/notebooks/data_analyzer.ipynb
@@ -105,22 +105,9 @@
     "        {\"fold\": 1, \"image\": \"tr_image_014.fake.nii.gz\", \"label\": \"tr_label_014.fake.nii.gz\"},\n",
     "        {\"fold\": 1, \"image\": \"tr_image_015.fake.nii.gz\", \"label\": \"tr_label_015.fake.nii.gz\"},\n",
     "    ],\n",
-    "}"
-   ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "## Generate image data"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
+    "}\n",
+    "\n",
+    "\n",
     "def simulate():\n",
     "    test_dir = tempfile.TemporaryDirectory()\n",
     "    dataroot = test_dir.name\n",
@@ -165,7 +152,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "If you would like to inspect the data stats, please check the `data_stats.yaml` under the directory\n",
+    "If you would like to inspect the data stats, please check the `data_stats.yaml` and `datastats_by_case.yaml` under the directory\n",
     "\n",
     "Next, we will perform the data analysis on a real-world dataset."
    ]
diff --git a/auto3dseg/notebooks/ensemble_byoc.ipynb b/auto3dseg/notebooks/ensemble_byoc.ipynb
index 155ea2f375..d13795d96e 100644
--- a/auto3dseg/notebooks/ensemble_byoc.ipynb
+++ b/auto3dseg/notebooks/ensemble_byoc.ipynb
@@ -83,7 +83,7 @@
    "source": [
     "## Simulate a special dataset\n",
     "\n",
-    "It is well known that AI takes time to train. To provide the \"Hello World!\" experience of Auto3D in this notebook, we will simulate a small dataset and run training only for multiple epochs. Due to the nature of AI, the performance shouldn't be highly expected, but the entire pipeline will be completed within minutes!\n",
+    "It is well known that AI takes time to train. To provide the \"Hello World!\" experience of Auto3DSeg in this notebook, we will simulate a small dataset and run training only for multiple epochs. Due to the nature of AI, the performance shouldn't be highly expected, but the entire pipeline will be completed within minutes!\n",
     "\n",
     "`sim_datalist` provides the information of the simulated datasets. It lists 12 training and 2 testing images and labels.\n",
     "The training data are split into 3 folds. Each fold will use 8 images to train and 4 images to validate.\n",
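
For readers skimming this patch without opening the notebooks: the `data_analyzer.ipynb` hunks above revolve around MONAI's `DataAnalyzer`, whose analysis step produces the statistics files referenced in the edited text (`data_stats.yaml`, `datastats_by_case.yaml`). The sketch below is not part of the patch; it is a minimal illustration assuming a MONAI release that ships `monai.apps.auto3dseg.DataAnalyzer`, with placeholder paths, and the exact signature and output filenames should be checked against the installed MONAI version.

```python
# Minimal sketch (not part of the patch above), assuming a MONAI release that
# provides monai.apps.auto3dseg.DataAnalyzer. All paths are placeholders.
from monai.apps.auto3dseg import DataAnalyzer

datalist = "./datalist.json"  # hypothetical datalist with "training"/"testing" entries
dataroot = "./dataroot"       # hypothetical folder containing the referenced images/labels

# Compute per-case and summary dataset statistics; results are also written to output_path.
analyzer = DataAnalyzer(datalist, dataroot, output_path="./data_stats.yaml")
stats = analyzer.get_all_case_stats()

# Inspect the returned dictionary (summary and per-case entries; key names may
# vary slightly across MONAI versions).
print(sorted(stats.keys()))
```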