Follow the README in the `TrainYourOwnYOLO` subfolder to set up and run YOLOv3.
For example, conda was used to create a virtual environment named `yolov3-env`, and the requirements were installed into it with pip:

```bash
cd TrainYourOwnYOLO
conda create --name yolov3-env
conda activate yolov3-env
pip install -r requirements.txt
python Minimal_Example.py
```
Download the folder with the trained YOLOv3 model from the link below, unzip it, and replace `TrainYourOwnYOLO/Data/Model_Weights` with it.
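A sketch of that replacement step, assuming the archive was downloaded to the repository root as `Model_Weights.zip` (a placeholder name; use the actual downloaded file):

```bash
# unpack the downloaded archive (placeholder name)
unzip Model_Weights.zip -d Model_Weights_new

# swap out the existing weights folder
rm -rf TrainYourOwnYOLO/Data/Model_Weights
mv Model_Weights_new TrainYourOwnYOLO/Data/Model_Weights
```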
The folder `Activities.v5-activitiessetyolo.yolokeras` contains images from the MPII human pose dataset, labeled with bounding boxes in Roboflow and exported in the Keras YOLO txt format.
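For reference, a Keras YOLO txt annotation file lists one image per line: the image filename followed by space-separated boxes, each written as `x_min,y_min,x_max,y_max,class_id`. The filenames and numbers below are made up for illustration:

```text
pose_0001.jpg 96,120,310,415,0 330,88,512,402,2
pose_0002.jpg 45,60,220,380,1
```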
Create a new folder `TrainYourOwnYOLO/Data/Source_Images/Test_Images` and copy the images from the `test` folder into it.
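A minimal sketch of that step, assuming the dataset folder sits next to the repository and the test images are JPEGs (adjust the paths to your layout):

```bash
mkdir -p TrainYourOwnYOLO/Data/Source_Images/Test_Images
cp Activities.v5-activitiessetyolo.yolokeras/test/*.jpg TrainYourOwnYOLO/Data/Source_Images/Test_Images/
```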
The steps below assume the virtual environment `yolov3-env` from the setup above has been created.
Label the poses of people in `TrainYourOwnYOLO/Data/Source_Images/Test_Images`:
```bash
# run YOLOv3 to get images with pose labels
cd TrainYourOwnYOLO
conda activate yolov3-env
cd 3_Inference
python Detector.py

# combine images into a video and show it
python create_video_from_images.py
```
You should get the following output:

- detection results in `TrainYourOwnYOLO/Data/Source_Images/Test_Image_Detection_Results`
- the results combined into a video at `TrainYourOwnYOLO/video.avi`
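As a rough illustration of what `create_video_from_images.py` does, the sketch below stitches the detection images into an AVI with OpenCV; the paths, frame rate, and file pattern are assumptions, not the repository's actual script:

```python
import glob
import os

import cv2  # opencv-python

# assumed locations and settings; the real script may differ
image_dir = "Data/Source_Images/Test_Image_Detection_Results"
output_path = "video.avi"
fps = 2

frames = sorted(glob.glob(os.path.join(image_dir, "*.jpg")))
height, width = cv2.imread(frames[0]).shape[:2]

writer = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*"XVID"), fps, (width, height))
for path in frames:
    # resize in case the labeled images differ slightly in size
    writer.write(cv2.resize(cv2.imread(path), (width, height)))
writer.release()
print(f"Wrote {len(frames)} frames to {output_path}")
```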
The graphs, the video, and the detected image results can be found in the `results` folder.
View the graphs in TensorBoard.dev: follow its setup instructions and run it.
The dataset `Activities.v5-activitiessetyolo.yolokeras` is used to train YOLOv3. To train manually, follow the steps below:
- Copy the `train` folder into `TrainYourOwnYOLO/Data/Source_Images/Training_Images`.
- Rename the `train` folder to `vott-csv-export`.
- Rename `_classes.txt` to `data_classes.txt` and move it into the folder `TrainYourOwnYOLO/Data/Model_Weights`.
- Edit the script `prepend-absolute-path-to-data-train`, replacing `ABSOLUTE_PATH` with the absolute path of your training images folder, then run it. This creates a new file `data_train.txt` and prepends the absolute path of the training images folder to each line of the file (a sketch of this step follows this list).
- Delete `_annotations.txt` since it is no longer needed.
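A minimal Python sketch of what the prepend step does, assuming `data_train.txt` is built from `_annotations.txt` by prefixing each image filename with the absolute folder path; the real `prepend-absolute-path-to-data-train` script may differ in details:

```python
# placeholder: replace with the absolute path of the training images folder
ABSOLUTE_PATH = "/path/to/TrainYourOwnYOLO/Data/Source_Images/Training_Images/vott-csv-export"

with open("_annotations.txt") as src, open("data_train.txt", "w") as dst:
    for line in src:
        line = line.strip()
        if not line:
            continue
        # each annotation line starts with the image filename, so prefixing it
        # with the folder path turns it into an absolute image path
        dst.write(f"{ABSOLUTE_PATH.rstrip('/')}/{line}\n")
```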
Then follow these training instructions.
These were the steps used to calculate the mean Average Precision (mAP) on the test images using this library.
- Copied `_annotations.txt` from the `Activities.v5-activitiessetyolo.yolokeras/test` folder into `mAP/scripts/extra`.
- Copied `Detection_Results.csv` from `TrainYourOwnYOLO/Data/Source_Images/Test_Image_Detection_Results` into `mAP/scripts/extra`, then changed into that directory:

  ```bash
  cd mAP/scripts/extra
  ```
- Replaced all text in `mAP/scripts/extra/class_list` with the contents of `Activities.v5-activitiessetyolo.yolokeras/test/_classes.txt`.
- To convert the ground truth into the mAP library format, ran:

  ```bash
  python convert_keras-yolo3.py --gt _annotations.txt
  ```

  The output files were created in `mAP/scripts/extra/from_kerasyolo3/version_20210506002504` and were copied into `input/ground-truth/`.
- To convert the detection results into the mAP library format, ran (a sketch of this CSV conversion follows this list):

  ```bash
  python create-detection-results-txt-from-csv.py
  ```

  This created `detection_results.txt`, a text file in the YOLO Keras format built from `Detection_Results.csv`. Then ran:

  ```bash
  python convert_keras-yolo3.py --dr detection_results.txt
  ```

  The output files were created in `mAP/scripts/extra/from_kerasyolo3/version_20210506001906` and were copied into `input/detection-results/`.
- Ran the script that intersects the ground-truth and detection-results files, in case YOLO missed some images entirely:

  ```bash
  python intersect-gt-and-dr.py
  ```
- Then ran the main mAP script:

  ```bash
  cd mAP
  python main.py
  ```

  Copied the `output` folder into the `results` folder.
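A sketch of what the CSV-to-txt conversion does, assuming `Detection_Results.csv` has columns named `image`, `xmin`, `ymin`, `xmax`, `ymax`, and `label` (the actual column names, and the exact line format expected by `convert_keras-yolo3.py`, should be checked against the real files):

```python
import csv
from collections import defaultdict

# assumed column names; adjust to the actual Detection_Results.csv header
boxes_per_image = defaultdict(list)
with open("Detection_Results.csv", newline="") as f:
    for row in csv.DictReader(f):
        boxes_per_image[row["image"]].append(
            f'{row["xmin"]},{row["ymin"]},{row["xmax"]},{row["ymax"]},{row["label"]}'
        )

# one line per image: <filename> <box> <box> ..., mirroring the Keras YOLO txt layout
with open("detection_results.txt", "w") as out:
    for image, boxes in boxes_per_image.items():
        out.write(f"{image} {' '.join(boxes)}\n")
```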
- An example Colab notebook detects people, crops the bounding-box images, and runs the pose detector to label the poses.