Hi-End-MAE: Hierarchical encoder-driven masked autoencoders are stronger vision learners for medical image segmentation

(Hi-End-MAE overview figure)

Fenghe Tang¹·², Qingsong Yao³, Wenxin Ma¹·², Chenxu Wu¹·², Zihang Jiang¹·², S. Kevin Zhou¹·²




News

[2024/02/14] Paper and code released!

Getting Started

Prepare Environment

conda create -n HiEndMAE python=3.9
conda activate HiEndMAE
pip install torch==1.13.0 torchvision==0.14.0 torchaudio==0.13.0
pip install packaging timm==0.5.4
pip install transformers==4.34.1 typed-argument-parser
pip install numpy==1.21.2 opencv-python==4.5.5.64 opencv-python-headless==4.5.5.64
pip install 'monai[all]'
pip install monai==1.2.0
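
After installation, a quick sanity check (a minimal sketch, not part of the repository) confirms that the pinned versions resolved correctly and that PyTorch can see the GPU:

import torch
import monai
import timm

print("torch:", torch.__version__)   # expected: 1.13.0
print("monai:", monai.__version__)   # expected: 1.2.0
print("timm:", timm.__version__)     # expected: 0.5.4
print("CUDA available:", torch.cuda.is_available())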

Prepare Datasets

We recommend converting the datasets into the nnUNet format.

└── Hi-End-MAE
    ├── data
        ├── Dataset001_BTCV
            └── imagesTr
                ├── xxx_0000.nii.gz
                ├── ...
        ├── Dataset006_FLARE2022
            └── imagesTr
                ├── xxx_0000.nii.gz
                ├── ...
        └── Other_dataset
            └── imagesTr
                ├── xxx_0000.nii.gz
                ├── ...
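
With this layout, the pre-training image list can be collected by globbing the imagesTr folders. The sketch below is illustrative only; the helper name and the data root are assumptions, not repository code:

import glob
import os

def collect_pretraining_images(data_root="data"):
    # Gather every *_0000.nii.gz volume under data/Dataset*/imagesTr
    pattern = os.path.join(data_root, "Dataset*", "imagesTr", "*_0000.nii.gz")
    return sorted(glob.glob(pattern))

image_paths = collect_pretraining_images()
print(f"Found {len(image_paths)} pre-training volumes")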

Start Training

Run training on multiple GPUs:

# An example of training on 4 GPUs with DDP
torchrun --nproc_per_node=4 --nnodes=1 --node_rank=0 --master_addr=localhost --master_port=12351 main.py
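
torchrun launches one process per GPU and exports RANK, LOCAL_RANK, and WORLD_SIZE. A training script typically consumes them with the standard env:// initializer; the snippet below is generic PyTorch DDP boilerplate shown to illustrate what the launch command provides, not code taken from main.py:

import os
import torch
import torch.distributed as dist

def init_distributed():
    # torchrun sets LOCAL_RANK for every spawned process
    local_rank = int(os.environ["LOCAL_RANK"])
    dist.init_process_group(backend="nccl", init_method="env://")
    torch.cuda.set_device(local_rank)
    return local_rank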

Fine-tuning

Load the pre-trained weights:

# An example of fine-tuning on BTCV (n_classes=14)
from downstream.factory import load_hi_end_mae_10k

model = load_hi_end_mae_10k(n_classes=14)
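
A usage sketch for the loaded model follows. The single input channel and the 96x96x96 patch size are assumptions for illustration; check the fine-tuning configuration in the repository for the actual values.

import torch
from downstream.factory import load_hi_end_mae_10k

model = load_hi_end_mae_10k(n_classes=14)
model.eval()

# Dummy CT patch: batch 1, 1 channel, 96x96x96 voxels (assumed input size)
x = torch.randn(1, 1, 96, 96, 96)
with torch.no_grad():
    logits = model(x)

# With a segmentation head, logits would have shape (1, 14, D, H, W)
print(logits.shape)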


Citation

If the code, paper, or weights help your research, please cite:


License

This project is released under the Apache 2.0 license. Please see the LICENSE file for more information.
