jbayrooti/divmaker


Multispectral Contrastive Learning with Viewmaker Networks

Jasmine Bayrooti, Noah Goodman, and Alex Tamkin

Paper link: https://arxiv.org/abs/2302.05757

0) Background

Multispectral satellite images capture rich information by measuring light beyond the visible spectrum. However, self-supervised learning is challenging in this domain because few pre-existing data augmentations are available. Viewmaker networks learn to produce appropriate augmentations for general data, enabling contrastive learning in many domains and modalities. In this project, we apply Viewmaker networks to four multispectral imaging problems and demonstrate that these domain-agnostic learning methods can provide valuable performance gains over existing domain-specific deep learning methods for multispectral satellite images.

1) Install Dependencies

We used the following PyTorch libraries for CUDA 10.1; you may have to adapt for your own CUDA version:

pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html

Install other dependencies:

conda install scipy
pip install -r requirements.txt
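After installing, you can sanity-check that your PyTorch build matches your CUDA setup. This quick check is not part of the repo; it only reports what the installed wheel was built against:

```python
import torch

# Print the PyTorch version, the CUDA version the wheel was built
# against (None for CPU-only builds), and whether a GPU is visible.
print("torch:", torch.__version__)
print("built for CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
```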

2) Running experiments

Start by running

source init_env.sh

Run experiments for the different datasets as follows:

scripts/run.py config/eurosat/pretrain_eurosat_simclr_L1_forced.json --gpu-device 0

This command runs Viewmaker pretraining on the EuroSAT multispectral satellite dataset using GPU 0. The config directory holds configuration files for the different experiments, specifying the hyperparameters for each. The first field in every config file, exp_base, specifies the base directory for saving experiment outputs. Change this for your own setup, and also update the dataset paths in src/datasets/root_paths.py. The experiments include standard Viewmaker pretraining, Divmaker pretraining, default-views pretraining, and the associated linear evaluation protocol for transfer learning.
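For illustration, the start of a config file might look like the sketch below. Only the exp_base field is confirmed above; the path is a placeholder, and the remaining fields vary per experiment, so consult the actual files under config/ for the real schema:

```json
{
    "exp_base": "/path/to/your/experiment/outputs"
}
```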

Training curves and other metrics are logged using wandb.ai.
