Object detection on an iOS device with Vision and Core ML (.mlmodel)
Core ML supports Vision for analyzing images, Natural Language for processing text, Speech for converting audio to text, and Sound Analysis for identifying sounds in audio. Core ML itself builds on low-level primitives such as Accelerate, BNNS, and Metal Performance Shaders, and it optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing memory footprint and power consumption.
- Models from the Core ML research community
- Models trained with Create ML
- Models converted from the TensorFlow format (e.g., with coremltools)
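Whichever route the model takes, it ends up as an .mlmodel (or .mlpackage) file bundled with the app, which Xcode compiles and wraps in an auto-generated Swift class. A minimal sketch of loading such a model for use with Vision follows; the `YOLOv3` class name is an assumption standing in for whatever class Xcode generates from your bundled model file:

```swift
import CoreML
import Vision

// Sketch: wrap a bundled, Xcode-compiled Core ML model for Vision.
// `YOLOv3` is hypothetical; Xcode names the generated class after your .mlmodel file.
func makeVisionModel() throws -> VNCoreMLModel {
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .all  // let Core ML choose CPU, GPU, or Neural Engine
    let coreMLModel = try YOLOv3(configuration: configuration).model
    return try VNCoreMLModel(for: coreMLModel)
}
```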
Coding Process (Detection Reference)
- Set up live capture
- Initialize the request (make a VNCoreMLRequest)
- Handle the request with a VNImageRequestHandler
- Process the results in the completion handler
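The steps above can be sketched end to end in one view controller. This is a sketch under assumptions, not a drop-in implementation: the `YOLOv3` model class is hypothetical (use whatever class Xcode generates from your bundled model), and the completion handler just prints detections where a real app would draw bounding boxes:

```swift
import AVFoundation
import UIKit
import Vision

final class DetectionViewController: UIViewController,
                                     AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private var detectionRequest: VNCoreMLRequest?

    // 1. Set up live capture: back-camera input plus a video-data output
    //    that delivers frames to this class on a background queue.
    private func setUpLiveCapture() {
        session.sessionPreset = .vga640x480
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .back),
              let input = try? AVCaptureDeviceInput(device: camera) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.alwaysDiscardsLateVideoFrames = true
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "video.queue"))
        session.addOutput(output)
        session.startRunning()
    }

    // 2. Initialize the request: wrap the Core ML model in a VNCoreMLRequest
    //    whose completion handler receives the detections.
    private func setUpRequest() {
        // `YOLOv3` is a hypothetical auto-generated model class.
        guard let model = try? VNCoreMLModel(
            for: YOLOv3(configuration: MLModelConfiguration()).model) else { return }

        let request = VNCoreMLRequest(model: model) { request, _ in
            // 4. Process the results (hop to the main queue for any UI work).
            guard let observations = request.results
                    as? [VNRecognizedObjectObservation] else { return }
            DispatchQueue.main.async {
                for observation in observations {
                    let best = observation.labels.first   // highest-confidence label
                    let box = observation.boundingBox     // normalized, lower-left origin
                    print(best?.identifier ?? "?", best?.confidence ?? 0, box)
                }
            }
        }
        request.imageCropAndScaleOption = .scaleFill
        detectionRequest = request
    }

    // 3. Handle the request: perform it on each frame's pixel buffer
    //    with a VNImageRequestHandler.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let request = detectionRequest,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                            orientation: .up, options: [:])
        try? handler.perform([request])
    }
}
```

Note that `boundingBox` comes back in Vision's normalized coordinate space (origin at the lower left), so it must be converted before drawing over a UIKit preview layer.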