# ObjectDetection_iOS

Object detection on an iOS device using Vision and Core ML (.mlmodel).

Demo: Shibuya Scramble Crossing live camera

Tested on an iPhone 8.

## Core ML Framework


Core ML supports Vision for analyzing images, Natural Language for processing text, Speech for converting audio to text, and Sound Analysis for identifying sounds in audio. Core ML itself builds on low-level primitives such as Accelerate, BNNS, and Metal Performance Shaders, and it optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing memory footprint and power consumption.
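
As a concrete illustration of that stack, here is a minimal sketch that loads a compiled Core ML model and lets the framework schedule work across the CPU, GPU, and Neural Engine. The model name `YOLOv3` is a hypothetical placeholder, not a file in this repo:

```swift
import CoreML
import Foundation

// A minimal sketch: load a compiled Core ML model bundled with the app.
// "YOLOv3" stands in for whatever .mlmodel the project actually uses.
func loadDetector() throws -> MLModel {
    let config = MLModelConfiguration()
    // .all lets Core ML dispatch across CPU, GPU, and Neural Engine.
    config.computeUnits = .all

    guard let url = Bundle.main.url(forResource: "YOLOv3", withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    return try MLModel(contentsOf: url, configuration: config)
}
```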

## Core ML Models

## Coding Process (Detection Reference)

  1. Set up live capture
  2. Make a request (`VNCoreMLRequest`)
  3. Handle the request (`VNImageRequestHandler`)
  4. Process the results (completion handler)
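
The sketch below strings the four steps above into one view controller. It assumes an Xcode-generated model class named `ObjectDetector`, a placeholder for whatever .mlmodel is added to the project:

```swift
import AVFoundation
import CoreML
import UIKit
import Vision

final class DetectionViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private var requests = [VNRequest]()

    // 1. Set up live capture: stream camera frames to this class.
    func setupLiveCapture() {
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.sessionPreset = .vga640x480
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "VideoDataOutput"))
        guard session.canAddOutput(output) else { return }
        session.addOutput(output)
        session.startRunning() // in production, start on a background queue
    }

    // 2. Make a request: wrap the Core ML model in a VNCoreMLRequest.
    func setupVision() throws {
        // "ObjectDetector" is hypothetical; substitute the class Xcode
        // generates for the .mlmodel bundled with the app.
        let model = try VNCoreMLModel(for: ObjectDetector(configuration: MLModelConfiguration()).model)
        let request = VNCoreMLRequest(model: model) { request, _ in
            // 4. Process the results in the completion handler.
            DispatchQueue.main.async {
                guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
                for observation in results {
                    guard let best = observation.labels.first else { continue }
                    print("\(best.identifier) \(best.confidence) at \(observation.boundingBox)")
                }
            }
        }
        requests = [request]
    }

    // 3. Handle the request: run it on every captured frame.
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up, options: [:])
        try? handler.perform(requests)
    }
}
```

To draw boxes on screen, convert each `boundingBox` (normalized coordinates with a lower-left origin) into the preview layer's coordinate space before rendering.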