The main objective of the DeepRun postdoctoral project is to develop multi-scale image recognition tools, using Deep Learning algorithms, applied to the estimation of biomass resources for bioenergy production, and to the reliability of hydrogen converters for energy storage optimization.
Here are the main features:
- Model generation: a model instance is generated from a source file or a set of fixed parameters
- Model training: a training phase can be performed from a given dataset and generated model
- Image inference: image segmentation is done with the trained model and high-resolution imagery
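The three stages above (generation, training, inference) can be sketched end to end. The snippet below is purely illustrative and is not DeepRun's actual API: the real models are deep networks built with TensorFlow, whereas here a trivial one-parameter threshold "model" stands in, and all function names are hypothetical.

```python
# Illustrative sketch of the generate -> train -> infer flow.
# NOT DeepRun's API: a toy threshold model replaces the deep network.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Model:
    threshold: float  # single "learned" parameter of this toy model

def generate_model(threshold: float = 0.5) -> Model:
    """Model generation: build a model instance from fixed parameters."""
    return Model(threshold=threshold)

def train(model: Model, x_data: list[list[float]], y_data: list[list[int]]) -> Model:
    """Model training: fit the threshold from a labelled dataset."""
    positives = [px for img, mask in zip(x_data, y_data)
                 for px, lab in zip(img, mask) if lab == 1]
    negatives = [px for img, mask in zip(x_data, y_data)
                 for px, lab in zip(img, mask) if lab == 0]
    model.threshold = (mean(positives) + mean(negatives)) / 2
    return model

def infer(model: Model, image: list[float]) -> list[int]:
    """Image inference: segment an image with the trained model."""
    return [1 if px >= model.threshold else 0 for px in image]

model = train(generate_model(), [[0.1, 0.9, 0.2, 0.8]], [[0, 1, 0, 1]])
print(infer(model, [0.05, 0.95]))  # -> [0, 1]
```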
The application's functionalities are designed to cover the specific needs of the project. As such, they are not exhaustive and will be updated on a regular basis.
Use the README.md to get started.
The main packages required by the application are listed in the installation instructions below.
Here is an example of instructions for setting up the project locally. To get a local copy up and running, follow these simple steps:
- Install GPU drivers, CUDA and cuDNN for accelerated computation.
- Create and load a virtual environment:

  ```sh
  sudo apt install python3-virtualenv
  python3 -m venv venv
  source ./venv/bin/activate
  # run 'deactivate' to leave the virtual environment
  ```
- Install GDAL and its Python bindings:

  ```sh
  sudo apt-get install python<PYTHON VERSION>-dev
  sudo add-apt-repository ppa:ubuntugis/ppa && sudo apt-get update
  sudo apt-get install gdal-bin
  sudo apt-get install libgdal-dev
  # add these exports to your .bashrc
  export CPLUS_INCLUDE_PATH=/usr/include/gdal
  export C_INCLUDE_PATH=/usr/include/gdal
  ogrinfo --version
  pip install GDAL==<GDAL VERSION FROM OGRINFO>
  ```
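The pinned pip version must match the system GDAL library. As a hedged sketch, the helper below (a hypothetical name, not part of DeepRun) extracts the version from the usual `ogrinfo --version` output format, assumed here to look like `GDAL 3.4.1, released 2021/12/27`:

```python
import re
import subprocess
from typing import Optional

def system_gdal_version(sample_output: Optional[str] = None) -> str:
    """Return the system GDAL version (e.g. '3.4.1') parsed from
    `ogrinfo --version` output such as 'GDAL 3.4.1, released 2021/12/27'."""
    if sample_output is None:
        sample_output = subprocess.run(
            ["ogrinfo", "--version"], capture_output=True, text=True, check=True
        ).stdout
    match = re.search(r"GDAL (\d+\.\d+\.\d+)", sample_output)
    if match is None:
        raise ValueError(f"could not parse GDAL version from {sample_output!r}")
    return match.group(1)

# The matching pip requirement is then f"GDAL=={system_gdal_version()}"
print(system_gdal_version("GDAL 3.4.1, released 2021/12/27"))  # -> 3.4.1
```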
  Or, using gdal-config:

  ```sh
  sudo apt-get install python<PYTHON VERSION>-dev
  sudo add-apt-repository ppa:ubuntugis/ppa && sudo apt-get update
  sudo apt-get install libgdal-dev gdal-bin
  export CPLUS_INCLUDE_PATH=$(gdal-config --cflags | sed 's/-I//')
  export C_INCLUDE_PATH=$(gdal-config --cflags | sed 's/-I//')
  pip install GDAL==$(gdal-config --version)
  ```
- Install miscellaneous packages:

  ```sh
  pip install flask
  pip install tensorflow
  pip install scikit-learn
  pip install matplotlib
  pip install rasterio
  pip install tqdm
  ```
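One way to keep these dependencies in a single place is a `requirements.txt` (a suggestion, not a file shipped with the project), with GDAL pinned to the version reported by `ogrinfo --version`:

```
flask
tensorflow
scikit-learn
matplotlib
rasterio
tqdm
GDAL==<GDAL VERSION FROM OGRINFO>
```

It can then be installed in one step with `pip install -r requirements.txt`.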
Setting up an environment does not by itself ensure reproducibility of results, since these also depend on the hardware, compilers and operating system. One solution is to deploy containers, in particular using the Apptainer software (previously known as Singularity), which enables application-level virtualization without requiring administrator rights.
- Assuming Apptainer is installed (please contact the administrator), build the container:

  ```sh
  apptainer build container/le2p.sif container/le2p.def
  ```

- Run a shell in the container without administrator rights:

  ```sh
  # --nv flag for GPU access
  # --bind flag if the working space is outside of the home directory
  apptainer shell --nv --bind /<Origin directory>/:/<Destination directory>/ container/le2p.sif
  ```
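The contents of `container/le2p.def` are not shown here; as a hedged sketch, a minimal Apptainer definition file for this stack could look like the following (the base image and package list are assumptions, not the project's actual definition):

```
Bootstrap: docker
From: tensorflow/tensorflow:2.13.0-gpu

%post
    apt-get update && apt-get install -y gdal-bin libgdal-dev
    pip install flask scikit-learn matplotlib rasterio tqdm
    pip install GDAL=="$(gdal-config --version)"

%runscript
    flask run --host=0.0.0.0
```

Starting from a GPU-enabled base image avoids installing CUDA/cuDNN inside the container by hand.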
...
Here is an example of how to install and set up the DeepRun application.
- Complete the prerequisites installation (GPU driver/CUDA/cuDNN and the development environment), or run inside the container.
- Clone the repo:

  ```sh
  git clone https://github.com/LE2P/DeepRun-GUI.git
  ```

- Navigate to the project directory and start the application:

  ```sh
  cd Deep-API
  flask run
  ```
Click on the Model section. Two options are available: generating a deep model from a source file, or from a set of parameters.
- Click on the 'Browse' button and select a saved model.
- Status message
- Click on the 'Generate model' button.
- Model summary message
Click on the Train section.
- Click on the 'MODEL ?' button.
- Status message
- Click on the 'Browse X dataset' button and select the input dataset.
- Import X dataset message
- Click on the 'Browse Y dataset' button and select the target dataset.
- Import Y dataset message
- Click on the 'START TRAINING' button.
- Training summary message
Click on the Inference section.
- Click on the preferred model selection.
- Model message
- Click on the 'Browse' image button and select the input image.
- Import image message
- Click on the 'Process Image' button.
- Inference summary message
- Add a default pre-trained model
- Add sample data
- Add a default dataset
- Add other segmentation models
- Add image bubble tracking
- Add a Changelog
- Add additional examples
- Fix the search functionality
- Multi-language support
  - French
  - Creole
  - Chinese
  - Spanish
See the open issues for a full list of proposed features (and known issues).
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the MIT License. See LICENSE.txt for more information.
Christophe LIN-KWONG-CHON - christophe.lin-kwong-chon@univ-reunion.fr
Mathieu DELSAUT - mathieu.delsaut@univ-reunion.fr
Project link: DeepRun
This work, registered as the “DeepRun” project, was supported partly by the European Union through the European Regional Development Fund (under Grant 2014FR16RFOP007), by the Reunion Island Region (under Grant GURDTI/20210802-0030854) and by University of Reunion Island.