YOLOv8 acceleration with TensorRT!
You can download the pretrained ONNX model from https://github.com/ultralytics .
You can export a TensorRT engine with build.py.
Usage:
python3 build.py --onnx yolov8s_nms.onnx --device cuda:0 --fp16
- --onnx : The ONNX model you downloaded.
- --device : The CUDA device you export the engine on.
- --half : Whether to export a half-precision model.
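build.py itself is not reproduced here. As a rough, minimal sketch (not the repository's actual script), an engine-build step with the TensorRT Python API could look like the following; the output file name and the --engine argument are illustrative assumptions:

```python
# Minimal sketch of building a TensorRT engine from an ONNX file.
# This is NOT the repository's build.py, only an illustration of the
# TensorRT Python API it presumably wraps.
import argparse
import tensorrt as trt

def build_engine(onnx_path: str, engine_path: str, fp16: bool) -> None:
    logger = trt.Logger(trt.Logger.INFO)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    # Parse the exported ONNX graph.
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError(f"Failed to parse {onnx_path}")

    config = builder.create_builder_config()
    if fp16 and builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)  # half-precision build

    # Serialize the engine and write it to disk.
    engine = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(engine)

if __name__ == "__main__":
    ap = argparse.ArgumentParser()
    ap.add_argument("--onnx", required=True)
    ap.add_argument("--engine", default="yolov8s_nms.engine")
    ap.add_argument("--fp16", action="store_true")
    args = ap.parse_args()
    build_engine(args.onnx, args.engine, args.fp16)
```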
You can also export a TensorRT engine with the trtexec tool.
Usage:
/usr/src/tensorrt/bin/trtexec --onnx=yolov8s_nms.onnx --saveEngine=yolov8s_nms.engine --fp16
If you installed TensorRT from a Debian package, trtexec is located at /usr/src/tensorrt/bin/trtexec.
If you installed TensorRT from a tar package, trtexec is under the bin folder of the directory you extracted it to.
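However the engine was produced (build.py or trtexec), a small check like the one below can confirm it deserializes correctly. This is an illustrative snippet, not part of the repository, and it assumes TensorRT 8.5+ for the named-tensor API:

```python
# Quick sanity check: deserialize the engine and print its I/O tensors.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("yolov8s_nms.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    print(name, engine.get_tensor_mode(name), engine.get_tensor_shape(name))
```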
You can run inference on images with the engine using infer.py.
Usage:
python3 infer.py --engine yolov8s_nms.engine --imgs data --show --out-dir outputs --device cuda:0
- --engine : The engine you exported.
- --imgs : The path of the images you want to detect.
- --show : Whether to show detection results.
- --out-dir : Where to save detection result images. It will not work when the --show flag is used.
- --device : The CUDA device you use.
- --profile : Profile the TensorRT engine.
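infer.py is not reproduced here. The sketch below is only a minimal illustration of engine inference with the TensorRT Python API plus PyCUDA; the 640x640 input size, the static-shape engine, the sample image path, and the NMS-style outputs are assumptions, not the repository's actual implementation:

```python
# Minimal inference sketch (NOT the repository's infer.py): run one image
# through the exported engine with TensorRT + PyCUDA.
import cv2
import numpy as np
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("yolov8s_nms.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Simple preprocessing: resize, BGR->RGB, normalize, NCHW (assumed 640x640 input).
img = cv2.imread("data/bus.jpg")
blob = cv2.resize(img, (640, 640))[:, :, ::-1].astype(np.float32) / 255.0
blob = np.ascontiguousarray(blob.transpose(2, 0, 1)[None])

# Allocate host/device buffers for every I/O tensor (TensorRT >= 8.5 API).
host, device, bindings = {}, {}, []
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    shape = tuple(engine.get_tensor_shape(name))
    dtype = trt.nptype(engine.get_tensor_dtype(name))
    host[name] = np.empty(shape, dtype=dtype)
    device[name] = cuda.mem_alloc(host[name].nbytes)
    bindings.append(int(device[name]))

# Assumes the first I/O tensor is the image input.
input_name = engine.get_tensor_name(0)
host[input_name][...] = blob
cuda.memcpy_htod(device[input_name], host[input_name])
context.execute_v2(bindings)
for i in range(1, engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    cuda.memcpy_dtoh(host[name], device[name])

# With an NMS-enabled export, the outputs typically hold detection count,
# boxes, scores and class ids; the exact names and order depend on the model.
print({name: host[name].shape for name in host})
```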
If you want to profile the TensorRT engine:
Usage:
python3 infer.py --engine yolov8s_nms.engine --profile
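For reference, per-layer timing can also be collected through TensorRT's IProfiler interface; this is a generic sketch and may not match how infer.py implements --profile:

```python
# Sketch of per-layer profiling via the TensorRT IProfiler interface
# (the repository's --profile flag may be implemented differently).
import tensorrt as trt

class LayerTimer(trt.IProfiler):
    def __init__(self):
        super().__init__()
        self.times = {}

    def report_layer_time(self, layer_name, ms):
        # Called by TensorRT after each layer when a profiler is attached.
        self.times[layer_name] = self.times.get(layer_name, 0.0) + ms

# Attach to an execution context before calling execute_v2:
#   context.profiler = LayerTimer()
#   context.execute_v2(bindings)
#   print(sorted(context.profiler.times.items(), key=lambda kv: -kv[1])[:10])
```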