add example notebooks and semantic segmentation model
Nico-Curti committed Nov 8, 2023
1 parent a5990f7 commit 5f11224
Showing 22 changed files with 1,939 additions and 405 deletions.
5 changes: 4 additions & 1 deletion .github/workflows/python.yml
@@ -36,6 +36,9 @@ jobs:
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
# flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Install pygraphomics
- name: Install deepskin model
run: |
python -m pip install .
- name: Test the execution
run: |
deepskin --help
1 change: 1 addition & 0 deletions .gitignore
@@ -84,3 +84,4 @@ __version__.py
# img
img/logo_hr.png
img/*.pptx
docs/source/notebooks/**/*.png
2 changes: 1 addition & 1 deletion .readthedocs.yml
@@ -19,7 +19,7 @@ sphinx:

# Optionally set the version of Python and requirements required to build your docs
python:
version: 3.7
version: 3.9
install:
- requirements: requirements.txt
- requirements: docs/requirements.txt
58 changes: 55 additions & 3 deletions CHANGELOG.md
@@ -4,7 +4,7 @@ All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.0.1] - 2023-11-02
## [0.0.1] - 2023-11-08

First version of the library.
This is the starting point of the development of the *deepskin* package.
@@ -18,13 +18,65 @@ Further improvements will occur in the next versions.
- :globe_with_meridians: [Global] Add first version of GitHub Actions CI for Python and Docs

- :computer: [Python] Installation file for the Python package
- :computer: [Python] Entry point of the library for its usage with command line interface (ref. `deepskin/__main__.py`)
- :computer: [Python] Automated versioning of the library via setup installation
- :computer: [Python] Definition of deepskin feature classes and statistics
- :computer: [Python] Insert model checkpoint download
- :computer: [Python] First version of the entire feature extraction module
- :computer: [Python] Move from binary to semantic segmentation using latest version of Deepskin db
- :computer: [Python] Add command line usage for PWAT estimation
- :computer: [Python] Add command line usage for image segmentation

- :construction: [Features] First list of `deepskin` features:
* **Feature**:
Text.
* **Color features**:
We extracted the average and standard deviation of *RGB* channels for each wound and peri-wound segmentation.
This set of measures aims to quantify the appearance of the wound area in terms of redness and color heterogeneity.

We converted each masked image into the corresponding *HSV* color space. For each channel, we extracted the average and standard deviation values.
The *HSV* color space is more informative than the *RGB* one since it accounts for differences in light exposure through the saturation channel.
In this way, we take into account the varying conditions under which the images were acquired.

Both sets of features aim to quantify the necrotic tissue component of the wounds.
The necrotic tissue, indeed, could be modeled as a darker component in the wound/peri-wound area, which alters the average color of the lesion.
The *Necrotic Tissue type* and the *Total Amount of Necrotic Tissue* involve 2/8 items in the PWAT estimation.

* **Redness features**:
The primary information on the healing stage of a wound can be obtained by monitoring its redness (erythema) compared to the surrounding area.
Several redness measurements have been proposed in the literature, belonging to different medical fields and applications.
We extracted two measures of redness.

The first measure was proposed by Park et al. [1] and involves a combination of the *RGB* channels, i.e.,

$$
Redness_{RGB} = \frac{1}{n} \sum_{i=1}^{n} \frac{2 R_i - G_i - B_i}{2 \, (R_i + G_i + B_i)}
$$

where *R*, *G*, and *B* are the red, green, and blue channels of the masked image, respectively, and *n* is the number of pixels in the considered mask.
This measure emphasizes the R intensity using a weighted combination of the three *RGB* channels.

The second measure was proposed by Amparo et al. [2] and involves a combination of the *HSV* channels, i.e.,

$$
Redness_{HSV} = \frac{1}{n} \sum_{i=1}^{n} H_i \times S_i
$$

where *H* and *S* represent the hue and saturation intensities of the masked image, respectively.
This measure tends to be more robust to differences in image light exposure.

Both features were extracted independently on the wound and peri-wound areas.
Redness estimations could help to quantify the *Peri-ulcer Skin Viability*, *Granulation Tissue Type*, and *Necrotic Tissue Type*, which represent 3/8 items involved in the PWAT estimation.

* **Morphological features**:
We measured the morphological and textural characteristics of the wound and peri-wound areas by computing the 13 Haralick's features.
Haralick's features are becoming standard texture descriptors in multiple medical image analyses, especially in the Radiomic research field.
This set of features was evaluated on the grey-level co-occurrence matrix (GLCM) associated with the grayscale versions of the original images, starting from the areas identified by our segmentation models.
The computed descriptors include energy, inertia, entropy, inverse difference moment, cluster shade, and cluster prominence (a rough code sketch of these color, redness, and texture computations is given right after this feature list).
Using textural elements, we aimed to quantify information related to the *Granulation Tissue types* and *Amount of Granulation Tissue*, which are 2/8 items of the total PWAT score.
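
The snippet below is a rough, non-authoritative sketch of how the color, redness, and texture descriptors above could be computed; it is not the `deepskin` implementation (the actual code lives in `deepskin/features.py`), and the use of OpenCV, NumPy, and `mahotas`, as well as the function names, are assumptions for illustration only.

```python
import cv2
import numpy as np
import mahotas  # assumed here only to obtain the 13 Haralick descriptors

def color_and_redness_features (img_rgb, mask):
  # keep only the pixels inside the (wound or peri-wound) binary mask
  roi = mask.astype(bool)
  rgb = img_rgb[roi].astype(np.float32)
  hsv = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2HSV)[roi].astype(np.float32)

  # average and standard deviation of the RGB and HSV channels
  rgb_stats = np.concatenate([rgb.mean(axis=0), rgb.std(axis=0)])
  hsv_stats = np.concatenate([hsv.mean(axis=0), hsv.std(axis=0)])

  # redness indices following the two formulas above
  R, G, B = rgb[:, 0], rgb[:, 1], rgb[:, 2]
  redness_rgb = np.mean((2 * R - G - B) / (2 * (R + G + B) + 1e-8))  # Park et al. [1]
  redness_hsv = np.mean(hsv[:, 0] * hsv[:, 1])                       # Amparo et al. [2]

  return rgb_stats, hsv_stats, redness_rgb, redness_hsv

def haralick_features (img_rgb, mask):
  # grayscale image restricted to the segmented area
  gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)
  gray[~mask.astype(bool)] = 0
  # 13 Haralick descriptors computed on the GLCM, averaged over the 4 directions
  return mahotas.features.haralick(gray, ignore_zeros=True, return_mean=True)
```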

- :closed_book: [Docs] First version of the README instructions
- :closed_book: [Docs] First version of the Sphinx documentation
- :closed_book: [Docs] First version of the Read-the-Docs documentation
- :closed_book: [Docs] List of notebook examples in the sphinx documentation
- :closed_book: [Docs] Notebook example for deepskin package features
- :closed_book: [Docs] Notebook example for ASSL training model
- :closed_book: [Docs] Notebook example for PWAT estimation
11 changes: 10 additions & 1 deletion README.md
@@ -38,6 +38,15 @@ Official implementation of the deepskin algorithm published on [International Jo
* [Acknowledgment](#acknowledgment)
* [Citation](#citation)

### :tada: Important updates :tada:

With the new version of the Deepskin dataset, we improved the segmentation model, which now supports multi-class semantic segmentation!
Using the `deepskin` package you can directly run the latest version of the model, obtaining for each image a semantic segmentation with the following classes:

* ![#f03c15](https://placehold.co/15x15/f03c15/f03c15.png) wound ROI
* ![#c5f015](https://placehold.co/15x15/c5f015/c5f015.png) patient body ROI
* ![#1589F0](https://placehold.co/15x15/1589F0/1589F0.png) background ROI

## Overview

The `deepskin` package aims to propose a fully automated pipeline for the wound-image processing
@@ -135,7 +144,7 @@ A complete list of beginner-examples for the build of a custom `deepskin` pipeline
For the sake of completeness, a simple `deepskin` pipeline can be obtained with the following snippet:

```python
from deepskin import deepskin_model
from deepskin import wound_segmentation
from deepskin import evaluate_PWAT_score

# load the image in any OpenCV supported fmt
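
# --- hedged continuation of the snippet: a minimal sketch based on the
# --- function signatures visible in deepskin/__main__.py; the original
# --- README snippet is truncated here, so the names below may differ
import cv2

bgr = cv2.imread('wound_image.png')  # placeholder path, not a file shipped with the repo
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)

# semantic segmentation of the wound image
mask = wound_segmentation(img=rgb, tol=0.5, verbose=False)

# PWAT estimation from the image and its segmentation mask
pwat = evaluate_PWAT_score(img=rgb, mask=mask, verbose=False)
print(f'PWAT prediction: {pwat:.3f}')
```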
7 changes: 7 additions & 0 deletions SOURCES.txt
@@ -2,3 +2,10 @@ setup.py
deepskin/__init__.py
deepskin/__version__.py
deepskin/__main__.py
deepskin/checkpoints.py
deepskin/constants.py
deepskin/features.py
deepskin/imgproc.py
deepskin/model.py
deepskin/pwat.py
deepskin/segmentation.py
6 changes: 6 additions & 0 deletions deepskin/__init__.py
@@ -4,6 +4,12 @@
from .__version__ import __version__
# import the segmentation model
from .model import deepskin_model
# import useful constant values
from .model import MODEL_CHECKPOINT
# import model checkpoint getter
from .checkpoints import download_model_weights
# import the wound segmentation algorithm
from .segmentation import wound_segmentation
# import the features for the wound monitoring
from .features import evaluate_features
# import the PWAT evaluator for the wound scoring
63 changes: 51 additions & 12 deletions deepskin/__main__.py
@@ -72,6 +72,26 @@ def parse_args ():
    help='Enable/Disable the code logging',
  )

  # deepskin --mask
  parser.add_argument(
    '--mask', '-m',
    dest='mask',
    required=False,
    action='store_true',
    default=False,
    help='Evaluate the semantic segmentation mask using the Deepskin model; the resulting mask will be saved to a png file in the same location as the input file',
  )

  # deepskin --pwat
  parser.add_argument(
    '--pwat', '-p',
    dest='pwat',
    required=False,
    action='store_true',
    default=False,
    help='Compute the PWAT score of the given wound-image',
  )

  args = parser.parse_args()

  return args
@@ -139,19 +159,38 @@ def main ():
    flush=True
  )

  # get the wound segmentation mask
  wound_mask = wound_segmentation(
    img=rgb,
    tol=0.5,
    verbose=args.verbose,
  )
  if args.mask or args.pwat:
    # get the semantic segmentation mask
    mask = wound_segmentation(
      img=rgb,
      tol=0.5,
      verbose=args.verbose,
    )
    # dump the resulting mask to file

    # get the output directory
    outdir = os.path.dirname(args.filepath)
    # get the filename
    name = os.path.basename(args.filepath)
    # remove extension
    name, _ = os.path.splitext(name)
    # build the output filename
    outfile = f'{outdir}/{name}_deepskin_mask.png'
    # dump the mask
    cv2.imwrite(outfile, mask)

  if args.pwat:
    # compute the wound PWAT
    pwat = evaluate_PWAT_score(
      img=rgb,
      mask=mask,
      verbose=args.verbose,
    )

  # compute the wound PWAT
  pwat = evaluate_PWAT_score(
    img=rgb,
    wound_mask=wound_mask,
    verbose=args.verbose,
  )
  print(f'{GREEN_COLOR_CODE}PWAT prediction: {pwat:.3f}{RESET_COLOR_CODE}',
    end='\n',
    flush=True,
  )


if __name__ == '__main__':
146 changes: 146 additions & 0 deletions deepskin/checkpoints.py
@@ -0,0 +1,146 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
# download model weights
import requests
from zipfile import ZipFile
from time import time as now

# constant values
from .constants import CRLF
from .constants import IMG_SIZE
from .constants import RESET_COLOR_CODE
from .constants import GREEN_COLOR_CODE

__author__ = ['Nico Curti']
__email__ = ['nico.curti2@unibo.it']

__all__ = [
'download_model_weights',
]

def download_file_from_google_drive (Id : str,
                                     destination : str,
                                     total_length : int,
                                     ):
  '''
  Download file from google drive page.

  Parameters
  ----------
  Id : str
    File Id in Google Drive page

  destination : str
    Destination path of the download

  total_length : int
    File dimension in bytes

  Returns
  -------
  None

  Notes
  -----
  .. note::
    The file Id can be extracted from the google drive page when the file is shared.
    The total length is useful only for graphics.
  '''

  url = 'https://docs.google.com/uc?export=download&confirm=1'

  def get_confirm_token (response):
    '''
    Check token validity.
    '''
    for key, value in response.cookies.items():
      if key.startswith('download_warning'):
        return value

    return None

  def save_response_content (response, destination):
    '''
    Download the file chunk by chunk and plot the progress
    '''
    chunk_size = 32768
    with open(destination, 'wb') as fp:
      dl = 0
      start = now()
      download = now()

      for chunk in response.iter_content(chunk_size):

        dl += len(chunk)
        done = int(50 * dl / total_length)
        progress = "█" * done + " " * (50 - done)
        perc = int(dl / total_length * 100)
        mb = dl / 1000000
        print((
          f'{CRLF}Downloading Deepskin model ... '
          f'|{progress}| {perc:.0f}% ({mb:.1f} Mb) {now() - start:<3.1f} sec'
          ),
          end='',
          flush=True
        )
        download = now()

        if chunk: # filter out keep-alive new chunks
          fp.write(chunk)

    print(f'{CRLF}Downloading Deepskin model ... {GREEN_COLOR_CODE}[DONE]{RESET_COLOR_CODE}',
      end='\n',
      flush=True,
    )

  session = requests.Session()
  response = session.get(
    url,
    params={'id' : Id},
    stream=True
  )
  token = get_confirm_token(response)

  if token:
    params = {
      'id' : Id,
      'confirm' : token
    }
    response = session.get(
      url,
      params=params,
      stream=True
    )

  save_response_content(response, destination)


def download_model_weights (Id : str, model_name : str):
  '''
  Download the model files from google-drive repository
  and unpack the files in the 'checkpoints' directory.
  '''

  print (f'Downloading Deepskin model ... ', end='', flush=True)
  download_file_from_google_drive(
    Id=Id,
    destination=f'{model_name}.zip',
    total_length=66262365
  )
  print ('Extracting files ... ', end='', flush=True)

  with ZipFile(f'{model_name}.zip') as zipper:
    zipper.extractall('.')

  print (f'{GREEN_COLOR_CODE}[DONE]{RESET_COLOR_CODE}')

  local = os.path.dirname(os.path.abspath(__file__))
  outdir = os.path.join(local, '..', 'checkpoints')
  os.makedirs(outdir, exist_ok=True)

  os.rename(f'{model_name}.h5', os.path.join(outdir, f'{model_name}.h5'))
  os.remove(f'{model_name}.zip')

  return
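
A hedged usage sketch of the helper above; both argument values below are placeholders for illustration only, not the real checkpoint id or name used by the package:

```python
from deepskin.checkpoints import download_model_weights

# both arguments are hypothetical placeholders
download_model_weights(
  Id='1AbCdEfGhIjKlMnOpQrStUvWxYz0123456',  # Google Drive file id of the checkpoint
  model_name='deepskin_semantic_model',     # name of the .h5 checkpoint inside the zip
)
```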
