Merge pull request #244 from jamesobutler/maintenance-tasks
Testing Maintenance tasks
wasserth authored Jan 11, 2024
2 parents 1dbe189 + ff6a733 commit 4cb664c
Showing 38 changed files with 414 additions and 257 deletions.
12 changes: 12 additions & 0 deletions .github/dependabot.yml
@@ -0,0 +1,12 @@
# Set update schedule for GitHub Actions

version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
# Check for updates to GitHub Actions every week
interval: "weekly"
commit-message:
# Prefix all commit messages with "CI: "
prefix: "CI"
23 changes: 23 additions & 0 deletions .github/workflows/codespell.yml
@@ -0,0 +1,23 @@
# GitHub Action to automate the identification of common misspellings in text files.
# https://github.com/codespell-project/actions-codespell
# https://github.com/codespell-project/codespell
name: codespell
on:
# Triggers the workflow on push or pull request events
push:
branches: [ master ]
pull_request:
branches: [ master ]

permissions:
contents: read

jobs:
codespell:
name: Check for spelling errors
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
- uses: codespell-project/actions-codespell@94259cd8be02ad2903ba34a22d9c13de21a74461 # v2.0
with:
check_filenames: true
25 changes: 25 additions & 0 deletions .github/workflows/lint.yml
@@ -0,0 +1,25 @@
name: CI (Lint)

on:
# Triggers the workflow on push or pull request events
push:
branches: [ master ]
pull_request:
branches:
- "*"

permissions:
contents: read

jobs:
lint:
name: Lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1

- uses: actions/setup-python@65d7f2d534ac1bc67fcd62888c5f4f3d2cb2b236 # v4.7.1
with:
python-version: '3.9'

- uses: pre-commit/action@646c83fcd040023954eafda54b4db0192ce70507 # v3.0.0
2 changes: 1 addition & 1 deletion .github/workflows/run_tests.yml
@@ -16,7 +16,7 @@ jobs:
uses: actions/setup-python@v2
with:
python-version: ${{matrix.python-version}}

- name: Install dependencies
run: |
python -m pip install --upgrade pip
4 changes: 2 additions & 2 deletions .github/workflows/run_tests_os.yml
@@ -11,7 +11,7 @@ jobs:
os: [ubuntu-latest, windows-latest, macos-latest]
python-version: ["3.10"]
runs-on: ${{ matrix.os }}

steps:
- uses: actions/checkout@v2

@@ -27,7 +27,7 @@ jobs:
pip install pytest Cython
pip install torch==2.0.0 -f https://download.pytorch.org/whl/cpu
pip install .
- name: Install dependencies on Windows / MacOS
if: runner.os == 'Windows' || runner.os == 'macOS'
run: |
32 changes: 32 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,32 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: "v4.4.0"
hooks:
- id: check-added-large-files
args: ['--maxkb=1024']
- id: check-ast
- id: check-case-conflict
- id: check-merge-conflict
- id: check-symlinks
- id: trailing-whitespace

- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.1.7
hooks:
# Run the linter.
- id: ruff
args: [ --fix ]

- repo: https://github.com/asottile/pyupgrade
rev: v3.15.0
hooks:
- id: pyupgrade
args: [--py39-plus]

- repo: https://github.com/codespell-project/codespell
rev: v2.2.6
hooks:
- id: codespell
args: [
"--write-changes"
]
32 changes: 16 additions & 16 deletions README.md
@@ -6,15 +6,15 @@ Tool for segmentation of over 117 classes in CT images. It was trained on a wide

![Alt text](resources/imgs/overview_classes_2.png)

Created by the department of [Research and Analysis at University Hospital Basel](https://www.unispital-basel.ch/en/radiologie-nuklearmedizin/forschung-radiologie-nuklearmedizin).
If you use it please cite our [Radiology AI paper](https://pubs.rsna.org/doi/10.1148/ryai.230024). Please also cite [nnUNet](https://github.com/MIC-DKFZ/nnUNet) since TotalSegmentator is heavily based on it.


### Installation

TotalSegmentator works on Ubuntu, Mac, and Windows and on CPU and GPU.

Install dependencies:
* Python >= 3.9
* [Pytorch](http://pytorch.org/) >= 1.12.1

@@ -45,7 +45,7 @@ TotalSegmentator -i ct.nii.gz -o segmentations

Next to the default task (`total`) there are more subtasks with more classes:

Openly available for any usage:
* **total**: default task containing 117 main classes (see [here](https://github.com/wasserth/TotalSegmentator#class-details) for a list of classes)
* **lung_vessels**: lung_vessels (cite [paper](https://www.sciencedirect.com/science/article/pii/S0720048X22001097)), lung_trachea_bronchia
* **body**: body, body_trunc, body_extremities, skin
@@ -56,7 +56,7 @@ Openly available for any usage:

*: These models are not trained on the full TotalSegmentator dataset but on several other, smaller datasets. Therefore, expect them to work less robustly.

Available with a license (free licenses available for non-commercial usage [here](https://backend.totalsegmentator.com/license-academic/). For a commercial license contact jakob.wasserthal@usb.ch):
* **heartchambers_highres**: myocardium, atrium_left, ventricle_left, atrium_right, ventricle_right, aorta, pulmonary_artery (trained on sub-millimeter resolution)
* **appendicular_bones**: patella, tibia, fibula, tarsal, metatarsal, phalanges_feet, ulna, radius, carpal, metacarpal, phalanges_hand
* **tissue_types**: subcutaneous_fat, skeletal_muscle, torso_fat
@@ -87,17 +87,17 @@ docker run --gpus 'device=0' --ipc=host -v /absolute/path/to/my/data/directory:/


### Running v1
If you want to keep on using TotalSegmentator v1 (e.g. because you do not want to change your pipeline) you
can install it with the following command:
```
pip install TotalSegmentator==1.5.7
```
The documentation for v1 can be found [here](https://github.com/wasserth/TotalSegmentator/tree/v1.5.7). Bugfixes for v1 are developed in the branch `v1_bugfixes`.
Our Radiology AI publication refers to TotalSegmentator v1.


### Resource Requirements
Totalsegmentator has the following runtime and memory requirements (using an Nvidia RTX 3090 GPU):
(1.5mm is the normal model and 3mm is the `--fast` model. With v2 the runtimes have increased a bit since
we added more classes.)

@@ -112,8 +112,8 @@ If you want to reduce memory consumption you can use the following options:


### Train/validation/test split
The exact split of the dataset can be found in the file `meta.csv` inside of the [dataset](https://doi.org/10.5281/zenodo.6802613). This was used for the validation in our paper.
The exact numbers of the results for the high-resolution model (1.5mm) can be found [here](resources/results_all_classes_v1.json). The paper shows these numbers in the supplementary materials Figure 11.


### Retrain model and run evaluation
@@ -126,7 +126,7 @@ If you want to combine some subclasses (e.g. lung lobes) into one binary mask (e
totalseg_combine_masks -i totalsegmentator_output_dir -o combined_mask.nii.gz -m lung
```

Normally weights are automatically downloaded when running TotalSegmentator. If you want to download the weights with an extra command (e.g. when building a docker container) use this:
```
totalseg_download_weights -t <task_name>
```
@@ -155,7 +155,7 @@ pip install git+https://github.com/wasserth/TotalSegmentator.git

### Typical problems

**ITK loading Error**
When you get the following error message
```
ITK ERROR: ITK only supports orthonormal direction cosines. No orthonormal definition was found!
@@ -166,12 +166,12 @@ pip install SimpleITK==2.0.2
```

Alternatively you can try
```
fslorient -copysform2qform input_file
fslreorient2std input_file output_file
```

**Bad segmentations**
When you get bad segmentation results check the following:
* does your input image contain the original HU values or are the intensity values rescaled to a different range?
* is the patient normally positioned in the image? (In axial view is the spine at the bottom of the image? In the coronal view is the head at the top of the image?)
@@ -181,21 +181,21 @@ When you get bad segmentation results check the following:
TotalSegmentator sends anonymous usage statistics to help us improve it further. You can deactivate it by setting `send_usage_stats` to `false` in `~/.totalsegmentator/config.json`.


### Reference
For more details see our [Radiology AI paper](https://pubs.rsna.org/doi/10.1148/ryai.230024) ([freely available preprint](https://arxiv.org/abs/2208.05868)).
If you use this tool please cite it as follows
```
Wasserthal, J., Breit, H.-C., Meyer, M.T., Pradella, M., Hinck, D., Sauter, A.W., Heye, T., Boll, D., Cyriac, J., Yang, S., Bach, M., Segeroth, M., 2023. TotalSegmentator: Robust Segmentation of 104 Anatomic Structures in CT Images. Radiology: Artificial Intelligence. https://doi.org/10.1148/ryai.230024
```
Please also cite [nnUNet](https://github.com/MIC-DKFZ/nnUNet) since TotalSegmentator is heavily based on it.
Moreover, we would really appreciate it if you let us know what you are using this tool for. You can also tell us what classes we should add in future releases. You can do so [here](https://github.com/wasserth/TotalSegmentator/issues/1).


### Class details

The following table shows a list of all classes.

TA2 is a standardized way to name anatomy. Mostly the TotalSegmentator names follow this standard.
For some classes they differ which you can see in the table below.

[Here](resources/totalsegmentator_snomed_mapping.csv) you can find a mapping of the TotalSegmentator classes to SNOMED-CT codes.
69 changes: 69 additions & 0 deletions pyproject.toml
@@ -0,0 +1,69 @@
[tool.ruff]
# Exclude a variety of commonly ignored directories.
exclude = [
".bzr",
".direnv",
".eggs",
".git",
".git-rewrite",
".hg",
".mypy_cache",
".nox",
".pants.d",
".pytype",
".ruff_cache",
".svn",
".tox",
".venv",
"__pypackages__",
"_build",
"buck-out",
"build",
"dist",
"node_modules",
"venv",
]

line-length = 550
indent-width = 4
target-version = "py39"

[tool.ruff.lint]
# Enable Pyflakes (`F`) and a subset of the pycodestyle (`E`) codes by default.
# Unlike Flake8, Ruff doesn't enable pycodestyle warnings (`W`) or
# McCabe complexity (`C901`) by default.
select = [
"E4", # Pycodestyle: Import
"E7", # Pycodestyle: Statement
"E9", # Pycodestyle: Runtime
"F" # Pyflakes: All codes
]
ignore = [
"E402", # module level import not at top of file
"E701", # multiple statements on one line (colon)
"E721", # do not compare types, use isinstance()
"E741", # do not use variables named l, O, or I
"F401", # module imported but unused
"F821", # undefined name
"F841" # local variable name is assigned to but never used
]

# Allow fix for all enabled rules (when `--fix`) is provided.
fixable = ["ALL"]
unfixable = []

# Allow unused variables when underscore-prefixed.
dummy-variable-rgx = "^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$"

[tool.ruff.format]
# Like Black, use double quotes for strings.
quote-style = "double"

# Like Black, indent with spaces, rather than tabs.
indent-style = "space"

# Like Black, respect magic trailing commas.
skip-magic-trailing-comma = false

# Like Black, automatically detect the appropriate line ending.
line-ending = "auto"
20 changes: 10 additions & 10 deletions resources/convert_dataset_to_nnunet.py
@@ -51,7 +51,7 @@ def combine_labels(ref_img, file_out, masks):
ref_img = nib.load(ref_img)
combined = np.zeros(ref_img.shape).astype(np.uint8)
for idx, arg in enumerate(masks):
file_in = Path(arg)
if file_in.exists():
img = nib.load(file_in)
combined[img.get_fdata() > 0] = idx+1
@@ -60,26 +60,26 @@ def combine_labels(ref_img, file_out, masks):
nib.save(nib.Nifti1Image(combined.astype(np.uint8), ref_img.affine), file_out)


if __name__ == "__main__":
"""
Convert the downloaded TotalSegmentator dataset (after unzipping it) to nnUNet format and
generate dataset.json and splits_final.json
example usage:
python convert_dataset_to_nnunet.py /my_downloads/TotalSegmentator_dataset /nnunet/raw/Dataset100_TotalSegmentator_part1 class_map_part_organs
You must set nnUNet_raw and nnUNet_preprocessed environment variables before running this (see nnUNet documentation).
"""

dataset_path = Path(sys.argv[1]) # directory containing all the subjects
nnunet_path = Path(sys.argv[2]) # directory of the new nnunet dataset
# TotalSegmentator is made up of 5 models. Choose which one you want to produce. Choose from:
# class_map_part_organs
# class_map_part_vertebrae
# class_map_part_cardiac
# class_map_part_muscles
# class_map_part_ribs
class_map_name = sys.argv[3]

class_map = class_map_5_parts[class_map_name]
