TrainDetectionModel

Training a Naruto character detector with the TensorFlow Object Detection API on Google Colab

Prepare training data

  • Use RectLabel to build your own training dataset easily on macOS (no equivalent tool was found for Windows).
  • Extract a sub-dataset from the COCO dataset (only the categories you need).
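
The COCO extraction step above can be sketched with only the standard library, since COCO annotations are plain JSON. This is a minimal sketch, not the repository's actual script; the function name `extract_coco_subset` and the file paths are hypothetical:

```python
import json

def extract_coco_subset(annotation_path, wanted_names, output_path):
    """Keep only the images, annotations, and categories for the wanted classes."""
    with open(annotation_path) as f:
        coco = json.load(f)

    # Map the wanted category names to their COCO category ids.
    keep_cats = [c for c in coco["categories"] if c["name"] in wanted_names]
    keep_ids = {c["id"] for c in keep_cats}

    # Keep annotations for those categories, then the images they reference.
    anns = [a for a in coco["annotations"] if a["category_id"] in keep_ids]
    image_ids = {a["image_id"] for a in anns}
    images = [img for img in coco["images"] if img["id"] in image_ids]

    subset = {"images": images, "annotations": anns, "categories": keep_cats}
    with open(output_path, "w") as f:
        json.dump(subset, f)
    return subset
```

You would still need to copy the corresponding image files into the sub-dataset folder; this only filters the annotation JSON.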

Process Locally

  1. Get the XML files (label info) for each image
  2. Convert the XML files to a single CSV file
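
The XML-to-CSV step above is commonly done with a small script like the following. This is a hedged sketch assuming the labels are in Pascal VOC-style XML (the format RectLabel can export); the function name `xml_to_csv` and the column layout mirror the widely used tutorial scripts, not necessarily this repository's exact code:

```python
import csv
import glob
import os
import xml.etree.ElementTree as ET

def xml_to_csv(xml_dir, csv_path):
    """Collect every Pascal VOC-style XML label file in xml_dir into one CSV."""
    rows = []
    for xml_file in sorted(glob.glob(os.path.join(xml_dir, "*.xml"))):
        root = ET.parse(xml_file).getroot()
        filename = root.findtext("filename")
        width = int(root.findtext("size/width"))
        height = int(root.findtext("size/height"))
        # One CSV row per annotated object box.
        for obj in root.findall("object"):
            box = obj.find("bndbox")
            rows.append([filename, width, height, obj.findtext("name"),
                         int(box.findtext("xmin")), int(box.findtext("ymin")),
                         int(box.findtext("xmax")), int(box.findtext("ymax"))])
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["filename", "width", "height", "class",
                         "xmin", "ymin", "xmax", "ymax"])
        writer.writerows(rows)
    return rows
```

The resulting CSV (one row per bounding box) is what the next step on Colab turns into a TFRecord.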

Process on Google Colab

  1. Convert the images + CSV file to a TFRecord (binary) file
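
The actual TFRecord serialization needs TensorFlow (`tf.train.Example`), but the core of the conversion is grouping the CSV rows per image and normalizing box coordinates, which can be shown with the standard library alone. This is a sketch of that preparation step, assuming the CSV columns produced above; the function names and the `label_map` dict are hypothetical:

```python
import csv
from collections import defaultdict

def group_csv_by_image(csv_path):
    """Group label rows per image filename; each group becomes one TFRecord example."""
    groups = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            groups[row["filename"]].append(row)
    return dict(groups)

def to_example_fields(filename, rows, label_map):
    """Build the per-image fields that a generate_tfrecord-style script feeds
    into tf.train.Example (box coordinates normalized to [0, 1])."""
    width = int(rows[0]["width"])
    height = int(rows[0]["height"])
    return {
        "image/filename": filename,
        "image/width": width,
        "image/height": height,
        "image/object/bbox/xmin": [int(r["xmin"]) / width for r in rows],
        "image/object/bbox/xmax": [int(r["xmax"]) / width for r in rows],
        "image/object/bbox/ymin": [int(r["ymin"]) / height for r in rows],
        "image/object/bbox/ymax": [int(r["ymax"]) / height for r in rows],
        "image/object/class/text": [r["class"] for r in rows],
        "image/object/class/label": [label_map[r["class"]] for r in rows],
    }
```

In the real Colab step, each field dict is wrapped in `tf.train.Features` and written with `tf.io.TFRecordWriter`.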

Upload to your Google Drive

Use ObjectDetectionAPI_Training_Naruto.ipynb (pay attention to the paths and folder names)

Configuring the development environment

Download the Google Object Detection API library (reference).

Download the fine-tuning checkpoint (a MobileNetV3 + SSDLite model is used here)

You can download a pretrained detection model from the TensorFlow Object Detection Model Zoo

Modify the config file

Different models correspond to different config files (in "models/research/object_detection/samples/configs/")

  • [Note]: If you cannot modify the val.record parameter directly, you may need to download the config file and edit it locally with the correct path to the test dataset.
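
The fields you typically have to edit in the sample config look roughly like the fragment below. The exact paths are placeholders for your own Drive/Colab layout, and the class count depends on your label map; consult the sample config for your model for the full file:

```
model {
  ssd {
    num_classes: 5  # number of Naruto character classes in your label_map.pbtxt
    # ... rest of the model definition from the sample config ...
  }
}
train_config {
  fine_tune_checkpoint: "pre-trained-model/model.ckpt"  # downloaded checkpoint prefix
  num_steps: 20000
}
train_input_reader {
  label_map_path: "annotations/label_map.pbtxt"
  tf_record_input_reader { input_path: "annotations/train.record" }
}
eval_input_reader {
  label_map_path: "annotations/label_map.pbtxt"
  tf_record_input_reader { input_path: "annotations/val.record" }
}
```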

Begin training

Training time varies with the specified number of training steps

  • [Note]: The connection to the remote Google GPU may be interrupted; don't worry about it. The training progress files are saved in the training folder, so you can ignore the interruption and continue training from the last checkpoint.
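
Resuming works because TensorFlow writes numbered `model.ckpt-<step>` files into the training folder and restarts from the highest step it finds. A small sketch of that lookup (the function name `latest_checkpoint` here is illustrative; in practice `tf.train.latest_checkpoint` does this for you):

```python
import os
import re

def latest_checkpoint(training_dir):
    """Return the model.ckpt-<step> prefix with the highest step number."""
    steps = []
    for name in os.listdir(training_dir):
        # Checkpoint files look like model.ckpt-200.index / .meta / .data-*
        m = re.match(r"model\.ckpt-(\d+)\.", name)
        if m:
            steps.append(int(m.group(1)))
    if not steps:
        return None
    return os.path.join(training_dir, f"model.ckpt-{max(steps)}")
```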

View the training results in TensorBoard

Export the frozen graph from the training checkpoint

See my other repository (exportMobileNet_SSDSeries)

You can convert the exported frozen graph to the Core ML format

Apply the detector model in your iOS app (reference)