CMPE_258_Group_Project

Setup

YOLOv3 pose detector

Follow the subfolder README to set up and run YOLOv3 in TrainYourOwnYOLO. For example, we used conda to create a virtual environment named yolov3-env and installed the requirements with pip:

cd TrainYourOwnYOLO
# create and activate the virtual environment
conda create --name yolov3-env
conda activate yolov3-env
# install dependencies and verify the setup on a minimal example
pip install -r requirements.txt
python Minimal_Example.py
Download trained YOLOv3 model

Download the folder containing the trained YOLOv3 model from the link below, unzip it, and replace TrainYourOwnYOLO/Data/Model_Weights with it.
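
For example, a minimal sketch assuming the download is saved as Model_Weights.zip in the repository root (the archive and folder names are assumptions; adjust to the actual download):

# hypothetical archive name; adjust to the actual download
unzip Model_Weights.zip
# swap the unzipped folder in for the placeholder weights folder
rm -rf TrainYourOwnYOLO/Data/Model_Weights
mv Model_Weights TrainYourOwnYOLO/Data/Model_Weights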

Dataset

The folder Activities.v5-activitiessetyolo.yolokeras is derived from the MPII human pose dataset. It was labeled with bounding boxes in Roboflow and exported in Keras YOLO txt format.

Create a new folder TrainYourOwnYOLO/Data/Source_Images/Test_Images and copy the images from the dataset's test folder into it.
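
For example, assuming the dataset folder sits in the repository root and the test images are .jpg files (both assumptions; adjust to your layout):

mkdir -p TrainYourOwnYOLO/Data/Source_Images/Test_Images
# copy only the images, not the annotation txt files
cp Activities.v5-activitiessetyolo.yolokeras/test/*.jpg TrainYourOwnYOLO/Data/Source_Images/Test_Images/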

Run

Assuming the virtual environment yolov3-env was created during setup, label the poses of people in TrainYourOwnYOLO/Data/Source_Images/Test_Images:

# run YOLOv3 to get images with pose labels
cd TrainYourOwnYOLO
conda activate yolov3-env
cd 3_Inference
python Detector.py
# combine images into a video and show it
python create_video_from_images.py

Output

You should get the following output:

  • detection results in TrainYourOwnYOLO/Data/Source_Images/Test_Image_Detection_Results
  • the results combined into a video at TrainYourOwnYOLO/video.avi (see the playback example below)
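
To preview the video, any player that handles AVI works; for example, with ffmpeg installed:

ffplay TrainYourOwnYOLO/video.avi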

Results

Graphs, the video, and detected-image results can be found in the results folder.

View graphs in TensorBoard.dev:


Development Instructions

Follow the setup instructions above, then run.

Training

The dataset Activities.v5-activitiessetyolo.yolokeras is used to train YOLOv3. If you want to train manually, follow the steps below:

Copy train files and images to YOLOv3 folder
  1. Copy-paste the train folder into TrainYourOwnYOLO/Data/Source_Images/Training_Images.
  2. Rename the train folder to vott-csv-export.
  3. Rename _classes.txt to data_classes.txt and move it into the folder TrainYourOwnYOLO/Data/Model_Weights.
  4. Edit the script prepend-absolute-path-to-data-train, replacing ABSOLUTE_PATH with the absolute path of your training images folder, then run it. This creates a new file data_train.txt with the absolute path of the train images folder prepended to each line (see the sketch after this list).
  5. Delete _annotations.txt since it is no longer needed.
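
For reference, the prepend step amounts to something like the following one-liner (a sketch with a hypothetical absolute path; the actual script may differ):

# prepend the training images path to every line of the annotations file
sed "s|^|/absolute/path/to/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export/|" _annotations.txt > data_train.txt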

Then follow these training instructions.

Calculate mAP:

These were the steps used to calculate mean Average Precision (mAP) on the test images using this library.

  1. Copied _annotations.txt from the Activities.v5-activitiessetyolo.yolokeras/test folder into mAP/scripts/extra.
  2. Copied Detection_Results.csv from TrainYourOwnYOLO/Data/Source_Images/Test_Image_Detection_Results into mAP/scripts/extra (both copies are sketched below).
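
The copy steps above amount to something like the following, assuming the two repositories sit side by side in the same parent folder (an assumption; adjust the paths to your layout):

cp Activities.v5-activitiessetyolo.yolokeras/test/_annotations.txt mAP/scripts/extra/
cp TrainYourOwnYOLO/Data/Source_Images/Test_Image_Detection_Results/Detection_Results.csv mAP/scripts/extra/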

cd mAP/scripts/extra

  3. Replaced all the text in mAP/scripts/extra/class_list with the contents of Activities.v5-activitiessetyolo.yolokeras/test/_classes.txt.

  4. To create the ground-truth files in the mAP library format, ran this script:

python convert_keras-yolo3.py --gt _annotations.txt

Output files were created in mAP/scripts/extra/from_kerasyolo3/version_20210506002504

Copied these files into input/ground-truth/
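
For example, run from the parent folder containing mAP (the version folder name is a timestamp and will differ on your run; the detection-results copy in the later step is analogous):

cp mAP/scripts/extra/from_kerasyolo3/version_20210506002504/* mAP/input/ground-truth/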

  5. To create the detection-result files in the mAP library format, ran this script:

python create-detection-results-txt-from-csv.py

This created detection_results.txt from Detection_Results.csv in Keras YOLO format. Then ran:

python convert_keras-yolo3.py --dr detection_results.txt

Output files were created in mAP/scripts/extra/from_kerasyolo3/version_20210506001906

Copied these files into input/detection-results/

  6. Ran a script to intersect the ground-truth and detection-results files, in case YOLO misses some images entirely:

python intersect-gt-and-dr.py

  7. Then ran the mAP main script:

cd mAP
python main.py

Copied the output folder into the results folder.

More info:
  • an example Colab notebook detecting a person, extracting the box images, and running the pose detector to label poses.

Architecture

architecture diagram
