Typo fix under auto3dseg folder (Project-MONAI#1706)
### Checks
<!--- Put an `x` in all the boxes that apply, and remove the not
applicable items -->
- [ ] Avoid including large-size files in the PR.
- [ ] Clean up long text outputs from code cells in the notebook.
- [ ] For security purposes, please check the contents and remove any sensitive info such as user names and private keys.
- [ ] Ensure that (1) hyperlinks and markdown anchors are working, (2) relative paths are used for tutorial repo files, and (3) figures and graphs are placed in the `./figure` folder.
- [ ] Notebook runs automatically via `./runner.sh -t <path to .ipynb file>`

---------

Signed-off-by: KumoLiu <yunl@nvidia.com>
Signed-off-by: YunLiu <55491388+KumoLiu@users.noreply.github.com>
KumoLiu authored May 7, 2024
1 parent 179d3cd commit f8239f9
Showing 5 changed files with 8 additions and 21 deletions.
2 changes: 1 addition & 1 deletion auto3dseg/README.md
@@ -54,7 +54,7 @@ We provide [a two-minute example](notebooks/auto3dseg_hello_world.ipynb) for use

## A "Real-World" Example

- To further demonstrate the capabilities of **Auto3DSeg**, [here](tasks/instance22) is the detailed performance of the algorithm in **Auto3DSeg**, which won 2nd place in the MICCAI 2022 challenge** [INSTANCE22: The 2022 Intracranial Hemorrhage Segmentation Challenge on Non-Contrast Head CT (NCCT)](https://instance.grand-challenge.org/)**
+ To further demonstrate the capabilities of **Auto3DSeg**, [here](./tasks/instance22/README.md) is the detailed performance of the algorithm in **Auto3DSeg**, which won 2nd place in the MICCAI 2022 challenge **[INSTANCE22: The 2022 Intracranial Hemorrhage Segmentation Challenge on Non-Contrast Head CT (NCCT)](https://instance.grand-challenge.org/)**

## Reference Python APIs for Auto3DSeg

2 changes: 1 addition & 1 deletion auto3dseg/docs/algorithm_generation.md
@@ -65,7 +65,7 @@ The code block would generate multiple algorithm bundles as follows. The folder

### Algorithm Templates

- The Python class **BundleGen** utilizes [the default algorithm templates](https://github.com/Project-MONAI/research-contributions/tree/main/auto3dseg) implicitly. The default algorithms are based on four established works (DiNTS, SegResNet, SegResNet2D, and SwinUNETR). They support both 3D CT and MR image segmentation. In the template, some items are empty or null, and they will be filled together with dataset information. The part of the configuration file "hyper_parameters.yaml" is shown below. In the configuration, the items (like "bundle_root", "data_file_base_dir", and "patch_size") will be filled up automatically with any user interaction.
+ The Python class **BundleGen** utilizes [the default algorithm templates](https://github.com/Project-MONAI/research-contributions/tree/main/auto3dseg) implicitly. The default algorithms are based on four established works (DiNTS, SegResNet, SegResNet2D, and SwinUNETR). They support both 3D CT and MR image segmentation. In the template, some items are empty or null, and they will be filled together with dataset information. The part of the configuration file "hyper_parameters.yaml" is shown below. In the configuration, the items (like "bundle_root", "data_file_base_dir", and "patch_size") will be filled up automatically without any user interaction.

```
bundle_root: null
```
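For context (not part of the diff above), a minimal sketch of how BundleGen is typically driven so that the null entries get filled in from the dataset; all paths and file names below are hypothetical placeholders, and keyword arguments may differ between MONAI versions:

```python
# Illustrative sketch only -- not from the commit. Assumes MONAI's Auto3DSeg API.
from monai.apps.auto3dseg import BundleGen

bundle_generator = BundleGen(
    algo_path="./work_dir",                            # where the algorithm bundles will be written
    data_stats_filename="./work_dir/datastats.yaml",   # statistics produced by DataAnalyzer
    data_src_cfg_name="./work_dir/data_src_cfg.yaml",  # dataset/task description (modality, datalist, ...)
)
# Generating the bundles fills items such as "bundle_root", "data_file_base_dir",
# and "patch_size" from the dataset information, with no user interaction.
bundle_generator.generate("./work_dir", num_fold=5)
```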
2 changes: 1 addition & 1 deletion auto3dseg/docs/hpo.md
@@ -12,7 +12,7 @@ The HPOGen class has a `run_algo()` function, which will be used by the third-pa

### Usage
The tutorial on how to use NNIGen is [here](../notebooks/hpo_nni.ipynb) and the tutorial for OptunaGen is [here](../notebooks/hpo_optuna.ipynb). The list of HPO algorithms in NNI and Optuna can be found on [the NNI GitHub page](https://github.com/microsoft/nni) and [Optuna documentation](https://optuna.readthedocs.io/en/stable/reference/samplers/index.html).
- For demonstration purposes, both of our tutorials use a grid search HPO algorithm to optimize the learning rate in training. Users can be easily modified to random search and bayesian based methods for more hyperparameters.
+ For demonstration purposes, both of our tutorials use a grid search HPO algorithm to optimize the learning rate in training. Users can easily switch to random search and bayesian based methods for more hyperparameters.
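
For context (not part of the diff), a minimal Optuna sketch of the switch described above, from a grid sampler to a random or Bayesian (TPE) sampler; the objective function is a hypothetical stand-in for an actual Auto3DSeg training run:

```python
# Illustrative sketch only -- not from the tutorial notebooks.
import optuna


def objective(trial):
    lr = trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True)
    # ... train/validate an Auto3DSeg algorithm with this learning rate ...
    return 1.0 - abs(lr - 1e-2)  # dummy validation score for demonstration


# Grid search over the learning rate, as in the tutorials:
sampler = optuna.samplers.GridSampler({"learning_rate": [1e-4, 1e-3, 1e-2, 1e-1]})
# Drop-in alternatives when tuning more hyperparameters:
# sampler = optuna.samplers.RandomSampler()   # random search
# sampler = optuna.samplers.TPESampler()      # Bayesian-style (TPE)

study = optuna.create_study(sampler=sampler, direction="maximize")
study.optimize(objective, n_trials=4)
```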

### Override Specific Parameters in the Algorithms before HPO
Users can change **Auto3DSeg** algorithms in HPO by providing a set of overriding parameters.
21 changes: 4 additions & 17 deletions auto3dseg/notebooks/data_analyzer.ipynb
@@ -105,22 +105,9 @@
" {\"fold\": 1, \"image\": \"tr_image_014.fake.nii.gz\", \"label\": \"tr_label_014.fake.nii.gz\"},\n",
" {\"fold\": 1, \"image\": \"tr_image_015.fake.nii.gz\", \"label\": \"tr_label_015.fake.nii.gz\"},\n",
" ],\n",
- "}"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Generate image data"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
+ "}\n",
+ "\n",
+ "\n",
"def simulate():\n",
" test_dir = tempfile.TemporaryDirectory()\n",
" dataroot = test_dir.name\n",
@@ -165,7 +152,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "If you would like to inspect the data stats, please check the `data_stats.yaml` under the directory\n",
+ "If you would like to inspect the data stats, please check the `data_stats.yaml` and `datastats_by_case.yaml` under the directory\n",
"\n",
"Next, we will perform the data analysis on a real-world dataset."
]
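For context (not part of the diff), a minimal sketch of the DataAnalyzer call that produces the data stats files referenced above; the datalist and data root paths are hypothetical placeholders:

```python
# Illustrative sketch only -- not from the commit. Assumes MONAI's Auto3DSeg API.
from monai.apps.auto3dseg import DataAnalyzer

analyzer = DataAnalyzer(
    datalist="./sim_datalist.json",   # JSON datalist such as the simulated one above
    dataroot="./sim_dataroot",        # folder holding the image/label files
    output_path="./data_stats.yaml",  # summary statistics; per-case stats go to a companion file
)
analyzer.get_all_case_stats()
```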
2 changes: 1 addition & 1 deletion auto3dseg/notebooks/ensemble_byoc.ipynb
@@ -83,7 +83,7 @@
"source": [
"## Simulate a special dataset\n",
"\n",
- "It is well known that AI takes time to train. To provide the \"Hello World!\" experience of Auto3D in this notebook, we will simulate a small dataset and run training only for multiple epochs. Due to the nature of AI, the performance shouldn't be highly expected, but the entire pipeline will be completed within minutes!\n",
+ "It is well known that AI takes time to train. To provide the \"Hello World!\" experience of Auto3DSeg in this notebook, we will simulate a small dataset and run training only for multiple epochs. Due to the nature of AI, the performance shouldn't be highly expected, but the entire pipeline will be completed within minutes!\n",
"\n",
"`sim_datalist` provides the information of the simulated datasets. It lists 12 training and 2 testing images and labels.\n",
"The training data are split into 3 folds. Each fold will use 8 images to train and 4 images to validate.\n",
