Tengine is a lightweight, high-performance, modular inference engine for embedded devices.
An open-source project for Windows developers to learn how to add AI to Windows apps using local models and APIs.
Efficient Inference of Transformer models
Free TPU for FPGA with a compiler supporting PyTorch/Caffe/Darknet/NCNN. An AI processor that uses Xilinx FPGAs to solve image classification, detection, and segmentation problems.
Sample code for world-class artificial intelligence SoCs for computer vision applications.
FREE TPU V3plus for FPGA is the free version of a commercial AI processor (EEP-TPU) for deep learning edge inference.
Easy usage of Rockchip's NPUs found in RK3588 and similar chips
Ollama alternative for Rockchip NPUs: an efficient solution for running AI and deep learning models on Rockchip devices with optimized NPU support (rkllm).
Hardware design of a universal NPU (CNN accelerator) for various convolutional neural networks.
High-speed, easy-to-use LLM serving framework for local deployment.
YoloV5 NPU for the RK3566/68/88
Simplified AI runtime integration for mobile app development.
YoloV8 NPU for the RK3566/68/88