Currently this is achieved through a Raspberry Pi Pico (RP2040) microcontroller connected to the host computer, acting as a USB HID device. The source (.ino) and compiled (.uf2) firmware can be found in the ./microController directory. By default the board opens its own web server in ad-hoc AP mode; you can also choose to connect it to an existing WLAN (STA mode). The device that connects to the microcontroller's HTTP server can be a smartphone running the model, so the host PC whose cursor is being controlled is oblivious to the fact. Once the model is loaded everything works without an internet connection (fully offline caching is not quite there yet).
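The exact HTTP API exposed by the firmware isn't documented here; as a rough sketch, a browser client on the phone might forward cursor deltas to the board like the snippet below. The AP address, endpoint path, and query parameters are assumptions for illustration, not the actual firmware API (check the .ino source):

```ts
// Hypothetical sketch: forward a relative cursor move to the RP2040's HTTP server.
// The AP-mode address (192.168.42.1), endpoint (/move) and parameters (dx, dy)
// are assumptions -- the real API lives in the .ino source.
const BOARD_URL = "http://192.168.42.1"; // assumed default AP-mode address

async function sendRelativeMove(dx: number, dy: number): Promise<void> {
  // Fire-and-forget keeps latency low; failures are logged, not retried.
  try {
    await fetch(`${BOARD_URL}/move?dx=${Math.round(dx)}&dy=${Math.round(dy)}`);
  } catch (err) {
    console.warn("Failed to reach the microcontroller:", err);
  }
}
```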
Try it at https://shubinwang.com/detect (THIS VERSION WON'T MOUSE JACK 😅)
- Move your cursor autonomously through video (live or pre-recorded)!
- Any kernel anti-cheat on the host PC stays oblivious.
- OS agnostic
- Fully offline
- Semi-install-less
|   | Absolute Mouse | Relative Mouse |
| --- | --- | --- |
| AP Mode | AP + Absolute | AP + Relative (DEFAULT UF2) |
| STA Mode | STA + Absolute | STA + Relative |
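The two mouse modes differ in how a detection is turned into a HID report: absolute mode maps the target straight to screen coordinates, while relative mode sends a delta from the current aim point. A minimal sketch of that mapping (screen resolution, smoothing gain, and helper names are assumptions, not code from this repo):

```ts
// Hypothetical sketch of converting a detection centre into either an absolute
// position or a relative delta. Resolution and smoothing gain are assumed values.
interface Point { x: number; y: number }

const SCREEN = { width: 1920, height: 1080 }; // assumed host resolution

// Absolute mode: detection centre (normalised 0..1) -> screen coordinates.
function toAbsolute(cx: number, cy: number): Point {
  return { x: Math.round(cx * SCREEN.width), y: Math.round(cy * SCREEN.height) };
}

// Relative mode: delta from the current aim point, scaled down for smoothing.
function toRelative(target: Point, current: Point, gain = 0.3): Point {
  return {
    x: Math.round((target.x - current.x) * gain),
    y: Math.round((target.y - current.y) * gain),
  };
}
```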
This project was originally inspired by https://github.com/RootKit-Org/AI-Aimbot. My earlier attempts were based on extending https://github.com/Hyuto/yolov8-onnxruntime-web to use the WebGPU runtime and to support webcam or pre-recorded input. Later I switched from YOLO run through onnxruntime-web to Google's MediaPipe Object Detector, which offers EfficientDet-Lite and MobileNet models through TensorFlow.js.
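For reference, setting up the MediaPipe Object Detector in the browser looks roughly like the sketch below. It assumes the @mediapipe/tasks-vision package; the model path and score threshold are illustrative, not this repo's actual configuration:

```ts
import { FilesetResolver, ObjectDetector } from "@mediapipe/tasks-vision";

// Load the WASM backend, create an EfficientDet-Lite detector, and run it
// on a <video> element frame by frame. Paths/thresholds are assumptions.
const vision = await FilesetResolver.forVisionTasks(
  "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@latest/wasm"
);

const detector = await ObjectDetector.createFromOptions(vision, {
  baseOptions: {
    modelAssetPath: "efficientdet_lite0.tflite", // assumed local model path
    delegate: "GPU",
  },
  runningMode: "VIDEO",
  scoreThreshold: 0.5, // assumed confidence cut-off
});

function onFrame(video: HTMLVideoElement): void {
  const result = detector.detectForVideo(video, performance.now());
  for (const det of result.detections) {
    const box = det.boundingBox; // origin x/y plus width/height in pixels
    // ...convert the box centre into a cursor move here
  }
}
```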
The original version of this project, based on modifying Hyuto/yolov8-onnxruntime-web, is available in ./extras/deprecated/ as a standalone; you'll need to get the YOLO and NMS models yourself (see the additional README there).
Past progress videos using onnxruntime-web are in ./extras/media.