This project implements a U-Net deep learning model for remote sensing image segmentation, aimed at early fire detection. It analyzes satellite images to detect areas where fires are starting.

The goal is to train the U-Net to segment satellite images and flag fire-prone areas. The model uses a ResNet34 backbone pretrained on ImageNet for feature extraction, fine-tuned for the task of detecting early fire signs in satellite imagery.
This project requires the following Python packages:

- `numpy`: for numerical operations.
- `matplotlib`: for visualizing data and results.
- `tqdm`: for progress bars during training and evaluation.
- `scikit-learn`: for machine learning tools.
- `torch`: the PyTorch deep learning framework.
- `torchvision`: for image transformations and vision-based operations.
- `segmentation-models-pytorch`: for pre-implemented segmentation models, including U-Net.
- `datasets`: for loading and processing datasets from the Hugging Face Hub.

These dependencies are listed in the `environment.yml` and `requirements.txt` files.
- Clone the repository:

  ```bash
  git clone https://github.com/your_username/yekhanfir-early-fire-detection-from-satelite-images-with-u-net.git
  ```

- Navigate into the project directory:

  ```bash
  cd yekhanfir-early-fire-detection-from-satelite-images-with-u-net
  ```

- Create a conda environment using `environment.yml`:

  ```bash
  conda env create -f environment.yml
  ```

- Activate the environment:

  ```bash
  conda activate remote-sensing-segmentation
  ```

- Install additional dependencies from `requirements.txt` using pip:

  ```bash
  pip install -r requirements.txt
  ```
The dataset used in this project is based on the California Burned Areas dataset, specifically the post-fire imagery. It is loaded with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset_post_fire = load_dataset("DarthReca/california_burned_areas", name="post-fire")
```
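As a quick sanity check, printing the returned object lists the available splits and their features (the exact split names depend on the dataset configuration):

```python
# Show the available splits and each split's features.
print(dataset_post_fire)
```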
The training pipeline is implemented in `scripts/train.py`. The script loads the dataset, creates the training and validation DataLoader instances, and trains the U-Net model for the specified number of epochs. A code sketch follows the list below.
- The dataset is split into training and validation sets.
- The model is trained using the Adam optimizer with a learning rate of 1e-3.
- For each batch, the model computes a loss using partial cross-entropy, focusing on labeled pixels.
- The model is evaluated after each epoch on the validation set.
- The model's state is saved after training completes.
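A minimal sketch of such a training loop, assuming placeholder names (`train_loader`, `val_loader`, `NUM_EPOCHS`) and using `CrossEntropyLoss` with `ignore_index` as one common way to restrict the loss to labeled pixels; the actual `scripts/train.py` may differ in details:

```python
import torch
import segmentation_models_pytorch as smp

# Assumed placeholders: train_loader / val_loader are PyTorch DataLoaders
# yielding (images, masks) batches; NUM_EPOCHS is the epoch count.
NUM_EPOCHS = 10

device = "cuda" if torch.cuda.is_available() else "cpu"

# U-Net with a ResNet34 encoder pretrained on ImageNet.
model = smp.Unet(
    encoder_name="resnet34",
    encoder_weights="imagenet",
    in_channels=3,  # adjust to the number of satellite bands used
    classes=2,      # background vs. fire-affected area
).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# "Partial" cross-entropy: pixels marked with ignore_index are excluded
# from the loss, so only labeled pixels contribute (assumed convention).
criterion = torch.nn.CrossEntropyLoss(ignore_index=-1)

for epoch in range(NUM_EPOCHS):
    model.train()
    for images, masks in train_loader:
        images, masks = images.to(device), masks.to(device)
        optimizer.zero_grad()
        logits = model(images)           # (B, classes, H, W)
        loss = criterion(logits, masks)  # masks: (B, H, W) class ids
        loss.backward()
        optimizer.step()

    # Evaluate on the validation set after each epoch.
    model.eval()
    with torch.no_grad():
        val_loss = sum(
            criterion(model(x.to(device)), y.to(device)).item()
            for x, y in val_loader
        ) / len(val_loader)
    print(f"epoch {epoch + 1}: validation loss {val_loss:.4f}")

# Save the trained weights.
torch.save(model.state_dict(), "unet_model.pth")
```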
To start training, run:

```bash
python scripts/train.py
```
The model is saved at the end of training as `unet_model.pth`.
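To reuse the trained weights later, rebuild the same architecture and load the state dict; a sketch, assuming the same constructor arguments as in training:

```python
import torch
import segmentation_models_pytorch as smp

# Rebuild the architecture (no pretrained encoder weights needed here),
# then load the trained parameters saved by the training script.
model = smp.Unet(encoder_name="resnet34", encoder_weights=None,
                 in_channels=3, classes=2)
model.load_state_dict(torch.load("unet_model.pth", map_location="cpu"))
model.eval()  # inference mode: disables dropout and batch-norm updates
```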
Once the model has been trained, you can evaluate its performance using metrics such as loss and Intersection over Union (IoU). Performance is reported for both the training and validation sets.
- Loss: Cross-entropy loss on the labeled pixels.
- IoU: Intersection over Union score, used to evaluate the segmentation accuracy.
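For reference, IoU for a class is the intersection of the predicted and ground-truth masks divided by their union. A simplified illustration below restricts scoring to labeled pixels; the `fire_class` and `ignore_index` values are assumptions, and the project's evaluation code may differ:

```python
import torch

def iou_score(logits: torch.Tensor, masks: torch.Tensor,
              fire_class: int = 1, ignore_index: int = -1) -> float:
    """IoU for one class, counting only labeled pixels."""
    preds = logits.argmax(dim=1)    # (B, H, W) predicted class ids
    valid = masks != ignore_index   # restrict scoring to labeled pixels
    pred_fg = (preds == fire_class) & valid
    true_fg = (masks == fire_class) & valid
    intersection = (pred_fg & true_fg).sum().item()
    union = (pred_fg | true_fg).sum().item()
    # If the class is absent from both prediction and target, IoU is 1.
    return intersection / union if union > 0 else 1.0
```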