
What Do AEs Learn? Challenging Common Assumptions in Unsupervised Anomaly Detection

Cosmin Bercea · Daniel Rueckert · Julia A. Schnabel

Official repository of the MICCAI 2023 paper.

Citation

If you find our work useful, please cite our paper:

@article{bercea2022we,
  title={What do we learn? debunking the myth of unsupervised outlier detection},
  author={Bercea, Cosmin I and Rueckert, Daniel and Schnabel, Julia A},
  journal={arXiv preprint arXiv:2206.03698},
  year={2022}
}

Abstract: Detecting abnormal findings in medical images is a critical task that enables timely diagnoses, effective screening, and urgent case prioritization. Autoencoders (AEs) have emerged as a popular choice for anomaly detection and have achieved state-of-the-art (SOTA) performance in detecting pathology. However, their effectiveness is often hindered by the assumption that the learned manifold contains only information that is important for describing samples within the training distribution. In this work, we challenge this assumption and investigate what AEs actually learn when they are tasked with solving anomaly detection problems. We find that standard, variational, and recent adversarial AEs are generally not well-suited for pathology detection tasks in which the distributions of normal and abnormal samples strongly overlap. We propose MorphAEus, a novel deformable AE that produces pseudo-healthy reconstructions refined by estimated dense deformation fields. Our approach improves the learned representations, leading to more accurate reconstructions, fewer false positives, and precise localization of pathology. We extensively validate our method on two public datasets and demonstrate SOTA performance in detecting pneumonia and COVID-19.
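As a schematic sketch (not the authors' implementation), reconstruction-based anomaly detection scores each pixel by the residual between the input and its pseudo-healthy reconstruction; MorphAEus additionally refines the reconstruction with estimated dense deformation fields before this comparison:

```python
import numpy as np

def anomaly_map(image: np.ndarray, pseudo_healthy: np.ndarray) -> np.ndarray:
    """Pixel-wise residual between an image and its pseudo-healthy
    reconstruction; high values mark candidate anomalies."""
    return np.abs(image - pseudo_healthy)
```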

Setup and Run

The code is based on the deep learning framework from the Institute of Machine Learning in Biomedical Imaging: https://github.com/compai-lab/iml-dl

Framework Overview:

1). Sign up for a free account and log in to your wandb account

wandb login

Paste the API key from https://wandb.ai/authorize when prompted.

2). Clone repository

git clone https://github.com/ci-ber/MorphAEus.git
cd MorphAEus

3). Create a virtual environment with the needed packages (use conda_environment-osx.yaml for macOS)

cd ${TARGET_DIR}/MorphAEus
conda env create -f conda_environment.yaml
conda activate py308  # or: source activate py308

4). Install PyTorch

Example installation:

  • with cuda:
pip3 install torch==1.9.1+cu111 torchvision==0.10.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html
  • w/o cuda:
pip3 install torch==1.9.1 torchvision==0.10.1 -f https://download.pytorch.org/whl/torch_stable.html
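To verify the installation (a generic sanity check, not part of the repository), you can query the installed version and CUDA availability:

```python
import torch

# Print the installed torch version and whether a CUDA device is visible;
# False is expected on CPU-only machines.
print(torch.__version__)
print(torch.cuda.is_available())
```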

5). Download datasets

Alternatively, you can use your own chest X-ray images with our pre-trained weights (weights/MorphAEus), or train from scratch on other anatomies and modalities.

Move the datasets to the expected paths (listed in the data/splits CSV files).
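As a small optional helper (the column name "filename" is an assumption; check the actual headers of the data/splits CSV files), you can list split entries whose image paths are not yet in place:

```python
import csv
import os

def missing_paths(split_csv, column="filename"):
    """Return the paths listed in a splits CSV that do not exist on disk."""
    with open(split_csv, newline="") as f:
        return [row[column] for row in csv.DictReader(f)
                if not os.path.exists(row[column])]
```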

6). Run the pipeline

[Optional] set config 'task' to test and load model from ./weights/MorphAEus/best_model.pt

python core/Main.py --config_path projects/23_morphaeus/configs/cxr/morphaeus.yaml

Refer to *.yaml files for experiment configurations.
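For reference, the optional test-mode change above might look like the following fragment (key names other than 'task' are illustrative; verify them against the shipped *.yaml files):

```yaml
task: test                                     # default is typically 'train'
model_path: ./weights/MorphAEus/best_model.pt  # illustrative key name
```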

That's it, enjoy! 🚀
