This is a YOLO model benchmark, powered by onnxruntime-web.
Supports WebGPU and WASM (CPU) backends.
Measures YOLO model inference time in the browser.
Shows inference time in real time on a chart, along with the average time.
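The running average shown alongside the chart can be kept with a small helper like the sketch below (plain JavaScript; the class and usage names are illustrative, not the app's actual code):

```javascript
// Minimal running-statistics helper for inference timings (illustrative sketch).
class InferenceStats {
  constructor() {
    this.times = []; // recorded durations in milliseconds
  }
  // Record one inference duration.
  add(ms) {
    this.times.push(ms);
  }
  // Average over all recorded runs (0 when no runs yet).
  average() {
    if (this.times.length === 0) return 0;
    return this.times.reduce((a, b) => a + b, 0) / this.times.length;
  }
}

// Timing one run with performance.now() (available in browsers and Node),
// assuming `session` is an onnxruntime-web InferenceSession (not executed here):
// const t0 = performance.now();
// await session.run(feeds);
// stats.add(performance.now() - t0);
```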
| Model | Input Size | Params |
| --- | --- | --- |
YOLO11-N | 640 | 2.6M |
YOLO11-S | 640 | 9.4M |
YOLO11-M | 640 | 20.1M |
YOLOv10-N | 640 | 2.3M |
YOLOv10-S | 640 | 7.2M |
YOLOv9-T | 640 | 2.0M |
YOLOv9-S | 640 | 7.1M |
GELAN-S2 | 640 | |
YOLOv8-N | 640 | 3.2M |
YOLOv8-S | 640 | 11.2M |
```bash
git clone https://github.com/nomi30701/yolo-model-benchmark-onnxruntime-web.git
cd yolo-model-benchmark-onnxruntime-web
yarn install # install dependencies
yarn run dev # start dev server
```
- Convert your YOLO model to ONNX format. Read more at Ultralytics.
```python
from ultralytics import YOLO

# Load a model
model = YOLO("yolo11n.pt")

# Export the model
model.export(format="onnx", opset=12)
```
- Copy your YOLO model to the `./public/models` folder. (You can also click the `Add model` button.)
- Add an `<option>` HTML element in `App.jsx` and change `value="YOUR_FILE_NAME"`, or press the `Add model` button.

  ```html
  ...
  <option value="YOUR_FILE_NAME">CUSTOM-MODEL</option>
  <option value="yolov10n">yolov10n-2.3M</option>
  <option value="yolov10s">yolov10s-7.2M</option>
  ...
  ```
- Select your model on the page.
- Done! 👍
✨ WebGPU support

To export an ONNX model that supports WebGPU, set `opset=12` when exporting.
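At session creation time, WebGPU can be preferred with a WASM (CPU) fallback. A minimal sketch (the helper function is illustrative; onnxruntime-web does accept an `executionProviders` list in `InferenceSession.create` options):

```javascript
// Prefer WebGPU when the browser exposes it, otherwise fall back to WASM (CPU).
function pickExecutionProviders(hasWebGPU) {
  return hasWebGPU ? ["webgpu", "wasm"] : ["wasm"];
}

// Usage with onnxruntime-web (not executed here; model path is an assumption):
// const ort = await import("onnxruntime-web");
// const session = await ort.InferenceSession.create("./models/yolo11n.onnx", {
//   executionProviders: pickExecutionProviders("gpu" in navigator),
// });
```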