5090 5161 minor doc/env fixes (#5162)
Fixes #5161 
part of #5090

### Types of changes
<!--- Put an `x` in all the boxes that apply, and remove the not
applicable items -->
- [x] Non-breaking change (fix or new feature that would not break
existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing
functionality to change).
- [ ] New tests added to cover the changes.
- [ ] Integration tests passed locally by running `./runtests.sh -f -u
--net --coverage`.
- [x] Quick tests passed locally by running `./runtests.sh --quick
--unittests --disttests`.
- [ ] In-line docstrings updated.
- [ ] Documentation updated, tested `make html` command in the `docs/`
folder.

Signed-off-by: Wenqi Li <wenqil@nvidia.com>
wyli authored Sep 16, 2022
1 parent ca90628 commit 70c0443
Showing 10 changed files with 43 additions and 37 deletions.
2 changes: 1 addition & 1 deletion docs/requirements.txt
@@ -20,7 +20,7 @@ sphinxcontrib-serializinghtml
sphinx-autodoc-typehints==1.11.1
pandas
einops
transformers
transformers<4.22 # https://github.com/Project-MONAI/MONAI/issues/5157
mlflow
tensorboardX
imagecodecs; platform_system == "Linux"
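The same pin is applied to `environment-dev.yml` and `setup.cfg` below. A minimal sanity check of an installed environment against this constraint could look like the following sketch (assuming `transformers` and `packaging` are available):

```python
# Sketch: check that an installed environment satisfies the new transformers pin,
# which works around https://github.com/Project-MONAI/MONAI/issues/5157.
from packaging.version import Version

import transformers

installed = Version(transformers.__version__)
assert installed < Version("4.22"), f"transformers {installed} does not satisfy the '<4.22' pin"
print(f"transformers {installed} is compatible with the pinned docs/dev environment")
```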
9 changes: 7 additions & 2 deletions docs/source/index.rst
@@ -23,8 +23,6 @@ Its ambitions are:
Features
--------

*The codebase is currently under active development*

- flexible pre-processing for multi-dimensional medical imaging data;
- compositional & portable APIs for ease of integration in existing workflows;
- domain-specific implementations for networks, losses, evaluation metrics and more;
@@ -72,6 +70,13 @@ Technical documentation is available at `docs.monai.io <https://docs.monai.io>`_

bundle_intro

Model Zoo
---------

`The MONAI Model Zoo <https://github.com/Project-MONAI/model-zoo>`_ is a place for researchers and data scientists to share the latest and greatest models from the community.
Utilizing `the MONAI Bundle format <https://docs.monai.io/en/latest/bundle_intro.html>`_ makes it easy to get started building workflows with MONAI.


Links
-----

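To complement the Model Zoo paragraph added above, a minimal sketch of fetching a published bundle with the `monai.bundle` API (the bundle name and target directory are example values, not part of this change):

```python
# Sketch: download a Model Zoo bundle into a local folder via the bundle API.
# "spleen_ct_segmentation" and "./bundles" are example values.
from monai.bundle import download

download(name="spleen_ct_segmentation", bundle_dir="./bundles")
# The downloaded folder should contain configs/, models/ and docs/ as described
# by the MONAI Bundle format; the CLI equivalent is roughly:
#   python -m monai.bundle download --name spleen_ct_segmentation --bundle_dir ./bundles
```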
14 changes: 7 additions & 7 deletions docs/source/modules.md
@@ -30,7 +30,7 @@ are handled with specific protocols, and the data arrays are often high-dimensio
[`monai.data`](https://github.com/Project-MONAI/MONAI/tree/dev/monai/data) modules include a set of domain-specific APIs
for various deep learning applications:

### Transforms with data in 'array' and 'dictionary' styles
### Transforms with data in array and dictionary styles

![3d transform examples](../images/affine.png)

@@ -83,7 +83,7 @@ domain-specific usability and pipeline performance.
### Cache IO and transforms data to accelerate training

Data-driven methods require many (potentially thousands of) epochs of training data reading and preprocessing. MONAI
provides multi-threaded cache-based datasets to accelerate the process. [[Datasets experiment]](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/dataset_type_performance.ipynb). The
provides multi-threaded cache-based datasets to accelerate the process [[Datasets experiment]](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/dataset_type_performance.ipynb). The
cache can be persistent and dynamic (`SmartCacheDataset`) and reused across different experiments [[SmartCache example]](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/distributed_training/unet_training_smartcache.py).
The following figure illustrates the training speedup compared with a regular PyTorch program.
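As a minimal sketch of the cached-dataset pattern described here (file paths and the transform chain are placeholders):

```python
# Sketch: cache deterministic preprocessing results in memory to speed up epochs.
# The file names below are placeholders.
from monai.data import CacheDataset, DataLoader
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged, ScaleIntensityd

files = [{"image": f"image_{i}.nii.gz"} for i in range(4)]
preprocess = Compose([LoadImaged(keys="image"), EnsureChannelFirstd(keys="image"), ScaleIntensityd(keys="image")])

# CacheDataset precomputes the deterministic transforms once, so later epochs read from memory
train_ds = CacheDataset(data=files, transform=preprocess, cache_rate=1.0, num_workers=2)
train_loader = DataLoader(train_ds, batch_size=2, shuffle=True)
```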

@@ -102,7 +102,7 @@ a `ThreadDataLoader` example is within the [Spleen fast training tutorial](https
### Public datasets

To quickly get started with popular training data, MONAI provides several ready-to-integrate Dataset classes
(such as `MedNISTDataset`, `DecathlonDataset`), which include data downloading, and support training/evaluation splits generation with transforms.
(such as `MedNISTDataset`, `DecathlonDataset`, [`TciaDataset`](https://github.com/Project-MONAI/tutorials/blob/main/modules/tcia_dataset.ipynb)), which include data downloading, and support training/evaluation splits generation with transforms.
[[Public datasets tutorial]](https://github.com/Project-MONAI/tutorials/blob/master/modules/public_datasets.ipynb)
The common workflow of predefined datasets:
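A hedged sketch of that workflow with `DecathlonDataset` (the root directory is a placeholder; `download=True` fetches the task archive on first use):

```python
# Sketch: download a Medical Segmentation Decathlon task and build a training split.
# "./data" is a placeholder root directory.
from monai.apps import DecathlonDataset
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged

train_ds = DecathlonDataset(
    root_dir="./data",
    task="Task09_Spleen",
    section="training",
    transform=Compose([LoadImaged(keys=["image", "label"]), EnsureChannelFirstd(keys=["image", "label"])]),
    download=True,
)
print(f"{len(train_ds)} training samples")
```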

@@ -231,8 +231,8 @@ A typical process of `decollate batch` is illustrated as follows (with a `batch_

Except for the pytorch-ignite based `monai.engines`, most of the MONAI modules could be used independently or combined
with other software packages. For example, MONAI can be easily integrated into popular frameworks such as
PyTorch-Lightning and Catalyst: [Lightning segmentation](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/spleen_segmentation_3d_lightning.ipynb),
[Catalyst segmentation](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/unet_segmentation_3d_catalyst.ipynb).
PyTorch-Lightning and Catalyst. [[Lightning segmentation](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/spleen_segmentation_3d_lightning.ipynb),
[Catalyst segmentation](https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/unet_segmentation_3d_catalyst.ipynb)]

## Bundle

@@ -264,7 +264,7 @@ A typical bundle example can include:
┗━ *license.txt
```
Details about the bundle config definition and syntax & examples are at [config syntax](https://docs.monai.io/en/latest/config_syntax.html).
A step-by-step [get started](https://github.com/Project-MONAI/tutorials/blob/master/modules/bundles/get_started.ipynb) tutorial notebook can help users quickly set up a bundle. [[bundle examples]](https://github.com/Project-MONAI/tutorials/tree/main/modules/bundle)
A step-by-step [get started](https://github.com/Project-MONAI/tutorials/blob/master/modules/bundles/get_started.ipynb) tutorial notebook can help users quickly set up a bundle. [[bundle examples](https://github.com/Project-MONAI/tutorials/tree/main/bundle), [model-zoo](https://github.com/Project-MONAI/model-zoo)]
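A minimal sketch of reading such a bundle config programmatically (the file paths and the `train#trainer` component id are illustrative; each bundle defines its own ids):

```python
# Sketch: parse a bundle config and instantiate a component referenced by id.
# "configs/train.yaml" and "train#trainer" are illustrative names.
from monai.bundle import ConfigParser

parser = ConfigParser()
parser.read_config("configs/train.yaml")   # the bundle's training config
parser.read_meta("configs/metadata.json")  # optional bundle metadata
trainer = parser.get_parsed_content("train#trainer")  # build the referenced component
trainer.run()
```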

## Federated Learning

@@ -288,7 +288,7 @@ with [`ClientAlgo`](https://docs.monai.io/en/latest/fl.html#clientalgo) to allow
It leverages the latest advances in MONAI
and GPUs to efficiently develop and deploy algorithms with state-of-the-art performance.
It first analyzes the global information such as intensity, dimensionality, and resolution of the dataset,
then generates algorithms in MONAI bundle format based on data statistics and algorithm templates.
then generates algorithms in MONAI bundle format based on data statistics and [algorithm templates](https://github.com/Project-MONAI/research-contributions/tree/main/auto3dseg).
Next, all algorithms initiate model training to obtain checkpoints with the best validation performance.
Finally, the ensemble module selects the algorithms via ranking trained checkpoints and creates ensemble predictions.

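A compact sketch of driving that pipeline end to end with `AutoRunner` (the modality and paths form a hypothetical data source config):

```python
# Sketch: run the Auto3DSeg pipeline (analysis -> algo generation -> training -> ensemble).
# The modality and paths below are placeholder values.
from monai.apps.auto3dseg import AutoRunner

data_src = {
    "modality": "CT",
    "datalist": "./task_datalist.json",
    "dataroot": "./data",
}
runner = AutoRunner(work_dir="./auto3dseg_work_dir", input=data_src)
runner.run()
```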
19 changes: 9 additions & 10 deletions docs/source/whatsnew_1_0.md
@@ -21,7 +21,7 @@ For more details about how to use the models, please see [the tutorials](https:/
It leverages the latest advances in MONAI
and GPUs to efficiently develop and deploy algorithms with state-of-the-art performance.
It first analyzes the global information such as intensity, dimensionality, and resolution of the dataset,
then generates algorithms in MONAI bundle format based on data statistics and algorithm templates.
then generates algorithms in MONAI bundle format based on data statistics and [algorithm templates](https://github.com/Project-MONAI/research-contributions/tree/main/auto3dseg).
Next, all algorithms initiate model training to obtain checkpoints with the best validation performance.
Finally, the ensemble module selects the algorithms via ranking trained checkpoints and creates ensemble predictions.

@@ -42,15 +42,14 @@ collaborative learning in medical imaging.
## MetaTensor Support for Digital Pathology Workflows
![pathology](../images/pathology-meta.png)

MetaTensor is officially released in MONAI v0.9, which is a simple yet elegant way to handle metadata along with the image
in the same object. In this release (v1.0), we support MetaTensor in all digital pathology components, and make sure that
the future development can benefit from them. With the help of MONAI Pathology Working Group, we have standardized a
set of metadata attributes for patches of images extracted from WSI to ensure reproducibility and enhance functionality
via relying on a standard set of attributes. The figure below shows all the pathology metadata attributes,
their definition, and their relation to MetaTensors. Using `LoadImage` transform with WSIReader will output a
MetaTensor with populated metadata inferred from the data file. Currently, WSIReader loads the patches into the CPU memory
but they can be transferred to GPU via appropriate transforms. We are working with the cuCIM team to make sure that we can
bring direct loading of images into the GPU in the future releases.
In this release, we support MetaTensor in all digital pathology components, and
make sure that future development can benefit from them. With the help of the
MONAI Pathology Working Group, we have standardized a set of metadata
attributes for patches of images extracted from WSI to ensure reproducibility
and enhance functionality by relying on a standard set of attributes. The
figure above shows all the pathology metadata attributes and their relation to
MetaTensors. Please see [the tutorials and
examples](https://github.com/Project-MONAI/tutorials/tree/main/pathology).
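A small sketch of obtaining such a metadata-populated patch (the slide path, backend, and patch coordinates are example values):

```python
# Sketch: read a patch from a whole-slide image and inspect the attached metadata.
# "my_slide.tiff" and the location/size values are placeholders.
from monai.data import WSIReader

reader = WSIReader(backend="cucim")  # "openslide" and "tifffile" are alternative backends
wsi = reader.read("my_slide.tiff")
patch, meta = reader.get_data(wsi, location=(1000, 1000), size=(256, 256), level=0)
# with LoadImage and a WSI reader, the same patch comes back as a MetaTensor carrying this metadata
print(patch.shape, sorted(meta.keys()))
```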

## Accelerated MRI Reconstruction
![MRI-reconstruction](../images/mri_recon.png)
4 changes: 2 additions & 2 deletions environment-dev.yml
@@ -38,7 +38,7 @@ dependencies:
- pandas
- requests
- einops
- transformers
- transformers<4.22 # https://github.com/Project-MONAI/MONAI/issues/5157
- mlflow
- tensorboardX
- pyyaml
@@ -47,7 +47,6 @@ dependencies:
- pynrrd
- pydicom
- h5py
- nni
- optuna
- pip
- pip:
@@ -65,3 +64,4 @@ dependencies:
- imagecodecs; platform_system == "Linux"
- tifffile; platform_system == "Linux"
- matplotlib!=3.5.0
- nni
8 changes: 4 additions & 4 deletions monai/apps/auto3dseg/auto_runner.py
@@ -381,7 +381,7 @@ def set_prediction_params(self, params: Optional[Dict[str, Any]] = None):
def set_hpo_params(self, params: Optional[Dict[str, Any]] = None):
"""
Set parameters for the HPO module and the algos before the training. It will attempt to (1) override bundle
templates with the key-value pairs in ``params`` (2) chagne the config of the HPO module (e.g. NNI) if the
templates with the key-value pairs in ``params`` (2) change the config of the HPO module (e.g. NNI) if the
key is found to be one of:
- "trialCodeDirectory"
@@ -394,7 +394,7 @@ def set_hpo_params(self, params: Optional[Dict[str, Any]] = None):
Args:
params: a dict that defines the overriding key-value pairs during instantiation of the algo. For
BundleAlgo, it will overide the template config filling.
BundleAlgo, it will override the template config filling.
"""
if params is None:
self.hpo_params = self.train_params
@@ -406,7 +406,7 @@ def set_nni_search_space(self, search_space):
Set the search space for NNI parameter search.
Args:
search_space: hyper paramter search space in the form of dict. For more information, please check
search_space: hyper parameter search space in the form of dict. For more information, please check
NNI documentation: https://nni.readthedocs.io/en/v2.2/Tutorial/SearchSpaceSpec.html .
"""
value_combinations = 1
@@ -471,7 +471,7 @@ def set_ensemble_method(self, ensemble_method_name: str = "AlgoEnsembleBestN", *

def _train_algo_in_sequence(self, history: List[Dict[str, Any]]):
"""
Train the Algos in a seqential scheme. The order of training is randomized.
Train the Algos in a sequential scheme. The order of training is randomized.
Args:
history: the history of generated Algos. It is a list of dicts. Each element has the task name
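Taken together, the HPO-related methods above suggest a usage pattern along these lines (a hedged sketch; the config path, override keys, and search space values are illustrative, and the NNI backend must be installed):

```python
# Sketch: enable NNI-backed hyper-parameter search in AutoRunner.
# The config path, override keys and search space values are illustrative only.
from monai.apps.auto3dseg import AutoRunner

runner = AutoRunner(input="./data_src_cfg.yaml", hpo=True, hpo_backend="nni")
runner.set_hpo_params(params={"maxTrialNumber": 20, "maxExperimentDuration": "4h"})
runner.set_nni_search_space({"learning_rate": {"_type": "choice", "_value": [1e-4, 1e-3, 1e-2]}})
runner.run()
```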
12 changes: 7 additions & 5 deletions monai/apps/auto3dseg/bundle_gen.py
@@ -75,7 +75,7 @@ def __init__(self, template_path: str):

def set_data_stats(self, data_stats_files: str): # type: ignore
"""
Set the data anlysis report (generated by DataAnalyzer).
Set the data analysis report (generated by DataAnalyzer).
Args:
data_stats_files: path to the datastats yaml file
@@ -253,8 +253,8 @@ def infer(self, image_file):
def predict(self, predict_params=None):
"""
Use the trained model to predict the outputs with a given input image. Path to input image is in the params
dict in a form of {"files", ["path_to_image_1", "path_to_image_2"]}. If it is not specified, then the pre-
diction will use the test images predefined in the bundle config.
dict in a form of {"files", ["path_to_image_1", "path_to_image_2"]}. If it is not specified, then the
prediction will use the test images predefined in the bundle config.
Args:
predict_params: a dict to override the parameters in the bundle config (including the files to predict).
@@ -295,7 +295,9 @@ class BundleGen(AlgoGen):
Args:
algo_path: the directory path to save the algorithm templates. Default is the current working dir.
algos: if dictionary, it outlines the algorithm to use. if None, automatically download the zip file
from the defatult link. if string, it represents the download link.
from the default link. if string, it represents the download link.
The current default options are released at:
https://github.com/Project-MONAI/research-contributions/tree/main/auto3dseg
data_stats_filename: the path to the data stats file (generated by DataAnalyzer)
data_src_cfg_name: the path to the data source config YAML file. The config will be in a form of
{"modality": "ct", "datalist": "path_to_json_datalist", "dataroot": "path_dir_data"}
@@ -332,7 +334,7 @@ def __init__(self, algo_path: str = ".", algos=None, data_stats_filename=None, d
├── configs
│ ├── hyperparameters.yaml # automatically generated yaml from a set of ``template_configs``
│ ├── network.yaml # automatically generated network yaml from a set of ``template_configs``
│ ├── transforms_train.yaml # automatically generated yaml to define tranforms for training
│ ├── transforms_train.yaml # automatically generated yaml to define transforms for training
│ ├── transforms_validate.yaml # automatically generated yaml to define transforms for validation
│ └── transforms_infer.yaml # automatically generated yaml to define transforms for inference
└── scripts
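A hedged sketch of the generation step these docstrings describe (all paths are placeholders; `generate` writes the per-algorithm folders shown above):

```python
# Sketch: produce a datastats report, then generate bundle algos from the default templates.
# All paths below are placeholders.
from monai.apps.auto3dseg import BundleGen, DataAnalyzer

analyzer = DataAnalyzer(datalist="./task_datalist.json", dataroot="./data", output_path="./datastats.yaml")
analyzer.get_all_case_stats()

bundle_generator = BundleGen(
    algo_path="./work_dir",
    data_stats_filename="./datastats.yaml",
    data_src_cfg_name="./data_src_cfg.yaml",
)
bundle_generator.generate(output_folder="./work_dir", num_fold=5)
history = bundle_generator.get_history()
```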
6 changes: 3 additions & 3 deletions monai/apps/auto3dseg/ensemble_builder.py
@@ -87,7 +87,7 @@ def ensemble_pred(self, preds, sigmoid=True):
ensemble the results using either "mean" or "vote" method
Args:
preds: a list of probablity prediction in Tensor-Like format.
preds: a list of probability prediction in Tensor-Like format.
sigmoid: use the sigmoid function to threshold probability one-hot map.
Returns:
@@ -205,7 +205,7 @@ class AlgoEnsembleBestByFold(AlgoEnsemble):
Ensemble method that select the best models that are the tops in each fold.
Args:
n_fold: number of cross-valiation folds used in training
n_fold: number of cross-validation folds used in training
"""

def __init__(self, n_fold: int = 5):
@@ -291,7 +291,7 @@ def add_inferer(self, identifier: str, gen_algo: BundleAlgo, best_metric: Option
"""

if best_metric is None:
raise ValueError("Feature to re-valiate is to be implemented")
raise ValueError("Feature to re-validate is to be implemented")

algo = {AlgoEnsembleKeys.ID: identifier, AlgoEnsembleKeys.ALGO: gen_algo, AlgoEnsembleKeys.SCORE: best_metric}
self.infer_algos.append(algo)
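These pieces come together roughly as follows (a sketch; the work dir, data source config, and `n_best` value are assumptions):

```python
# Sketch: build an ensemble over trained algos and run inference with a best-N strategy.
# The work dir, config path and n_best value are illustrative.
from monai.apps.auto3dseg import AlgoEnsembleBestN, AlgoEnsembleBuilder, import_bundle_algo_history

history = import_bundle_algo_history("./work_dir", only_trained=True)
builder = AlgoEnsembleBuilder(history, "./data_src_cfg.yaml")  # second arg: data source config file
builder.set_ensemble_method(AlgoEnsembleBestN(n_best=3))
ensemble = builder.get_ensemble()
predictions = ensemble()  # predicts on the test images defined in the data source config
```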
2 changes: 1 addition & 1 deletion monai/apps/auto3dseg/utils.py
@@ -60,7 +60,7 @@ def export_bundle_algo_history(history: List[Dict[str, BundleAlgo]]):
Save all the BundleAlgo in the history to algo_object.pkl in each individual folder
Args:
history: a List of Bundle. Typicall the history can be obtained from BundleGen get_history method
history: a List of Bundle. Typically, the history can be obtained from BundleGen get_history method
"""
for task in history:
for _, algo in task.items():
4 changes: 2 additions & 2 deletions setup.cfg
@@ -46,7 +46,7 @@ all =
imagecodecs
pandas
einops
transformers
transformers<4.22
mlflow
matplotlib
tensorboardX
@@ -93,7 +93,7 @@ pandas =
einops =
einops
transformers =
transformers
transformers<4.22
mlflow =
mlflow
matplotlib =
