
YOLO Model Benchmark on onnxruntime-web

This is a YOLO model benchmark, powered by onnxruntime-web.

Supports WebGPU and WASM (CPU) backends.

Test YOLO model inference time in the browser.

Shows inference time in a realtime chart, along with the average time.
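The average-time display boils down to keeping a running mean over recorded inference times. A minimal sketch of that bookkeeping (the `createTimer` helper is hypothetical, not part of this repo):

```javascript
// Hypothetical helper: records per-run inference times (in milliseconds)
// and reports their average, as the benchmark page does for its chart
// and average-time label.
function createTimer() {
  const times = [];
  return {
    record(ms) {
      times.push(ms);
    },
    average() {
      if (times.length === 0) return 0;
      return times.reduce((sum, t) => sum + t, 0) / times.length;
    },
  };
}

// Usage: wrap one inference call with performance.now() and record it.
// const start = performance.now();
// await session.run(feeds);
// timer.record(performance.now() - start);
```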

Models and Performance

| Model     | Input Size | Param. |
|-----------|------------|--------|
| YOLO11-N  | 640        | 2.6M   |
| YOLO11-S  | 640        | 9.4M   |
| YOLO11-M  | 640        | 20.1M  |
| YOLOv10-N | 640        | 2.3M   |
| YOLOv10-S | 640        | 7.2M   |
| YOLOv9-T  | 640        | 2.0M   |
| YOLOv9-S  | 640        | 7.1M   |
| GELAN-S2  | 640        |        |
| YOLOv8-N  | 640        | 3.2M   |
| YOLOv8-S  | 640        | 11.2M  |

Setup

git clone https://github.com/nomi30701/yolo-model-benchmark-onnxruntime-web.git
cd yolo-model-benchmark-onnxruntime-web
yarn install # install dependencies

Scripts

yarn run dev # start dev server 

Use another YOLO model

  1. Convert your YOLO model to ONNX format. Read more in the Ultralytics docs.
from ultralytics import YOLO

# Load a model
model = YOLO("yolo11n.pt")

# Export the model to ONNX
model.export(format="onnx", opset=12)
  2. Copy your YOLO model to the ./public/models folder. (You can also click the "Add model" button.)
  3. Add an <option> HTML element in App.jsx and set value="YOUR_FILE_NAME", or press the "Add model" button.
    ...
    <option value="YOUR_FILE_NAME">CUSTOM-MODEL</option>
    <option value="yolov10n">yolov10n-2.3M</option>
    <option value="yolov10s">yolov10s-7.2M</option>
    ...
  4. Select your model on the page.
  5. DONE!👍

✨ WebGPU Support

For an ONNX model to support WebGPU, set opset=12 when exporting the model.
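Picking between the two backends happens at session-creation time. A hedged sketch, assuming the standard onnxruntime-web `executionProviders` option (the `pickProviders` helper is hypothetical, not part of this repo):

```javascript
// Hypothetical helper: prefer WebGPU when the browser exposes it
// (navigator.gpu), otherwise fall back to the WASM (CPU) backend.
// Listing "wasm" after "webgpu" lets the runtime fall back if
// WebGPU session creation fails.
function pickProviders(hasWebGPU) {
  return hasWebGPU ? ["webgpu", "wasm"] : ["wasm"];
}

// Usage with onnxruntime-web (assumes `ort` is imported from "onnxruntime-web"):
// const session = await ort.InferenceSession.create("./models/yolo11n.onnx", {
//   executionProviders: pickProviders("gpu" in navigator),
// });
```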