banner3

Smart Wheelchair Control Based on Spatial Features of Hand Gesture.

Scikit-learn version Keras version matplotlib version MediaPipe version Tensorflow version OpenCV version IPyKernel version License

In collaboration with Rasyeedah binti Mohd Othman, this project trains a CNN model on a dataset of hand movement poses. The pose predicted in real time is then used to control the wheelchair's movement.
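
As a rough illustration of the approach, a small Keras CNN for the five gesture classes could look like the sketch below; the input size and layer choices are assumptions for illustration, not necessarily the architecture used in this repository.

  # Illustrative sketch only -- the actual architecture in the training file may differ.
  from tensorflow.keras import layers, models

  NUM_CLASSES = 5  # Berhenti, Maju, Mundur, TanganKanan, TanganKiri

  model = models.Sequential([
      layers.Input(shape=(128, 128, 3)),        # assumed input image size
      layers.Conv2D(32, 3, activation="relu"),  # learn local spatial features
      layers.MaxPooling2D(),
      layers.Conv2D(64, 3, activation="relu"),
      layers.MaxPooling2D(),
      layers.Flatten(),
      layers.Dense(64, activation="relu"),
      layers.Dense(NUM_CLASSES, activation="softmax"),  # one probability per gesture
  ])
  model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])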

Project Results

The testing was conducted at Tower 2 ITS, using the human gesture samples shown in the video on the left. In the video on the right, testing was performed to evaluate how reliably each gesture invoked its intended class.

agungyolo agungyolo

YouTube

Based on the test results, the following conclusions can be drawn:

Test FPS

  • The wheelchair can be controlled using the user’s hand gestures, captured by a camera mounted on the electric wheelchair. During testing, an average frame rate of 6.3866 FPS was recorded over 30 trials, with a maximum of 6.93 FPS and a minimum of 5.83 FPS. This shows that the system runs consistently on the device used (a minimal sketch of such an FPS measurement follows this list).
  • Further testing evaluated the model's ability to detect and predict gesture classes from a new user's pose. The "TanganKanan" (right hand) class achieved a 94% success rate (6% failure), the "TanganKiri" (left hand) class 95% (5% failure), the "Berhenti" (stop) class 96% (4% failure), the "Maju" (forward) class 92% (8% failure), and the "Mundur" (backward) class 93% (7% failure).
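
The FPS figures above can be measured with a generic OpenCV timing pattern like the following sketch; this is illustrative only, not necessarily the measurement code used in the tests.

  # Generic per-frame FPS measurement; not necessarily the code used in the tests above.
  import time

  import cv2

  cap = cv2.VideoCapture(0)
  prev = time.time()
  while True:
      ok, frame = cap.read()
      if not ok:
          break
      now = time.time()
      fps = 1.0 / (now - prev)  # instantaneous frames per second
      prev = now
      cv2.putText(frame, f"FPS: {fps:.2f}", (10, 30),
                  cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
      cv2.imshow("FPS test", frame)
      if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
          break
  cap.release()
  cv2.destroyAllWindows()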

Installation

Please use separate folders for training and control. venv setup for training:

  python --version
  python -m venv nama_venv
  nama_venv\Scripts\activate
  pip install opencv-python
  pip install mediapipe
  pip install numpy
  pip install matplotlib
  pip install tensorflow

The training file includes the dataset-collection process. Please modify the image data folder for each class. Example:

  CreateDataSet(0, "Berhenti", DirektoriDataSet)
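
If you are adapting the collection step, the sketch below shows what a CreateDataSet-style capture loop could look like, assuming OpenCV webcam capture into one subfolder per class; the signature matches the call above, but the body and the role of the first argument are illustrative assumptions, not the repository's actual code.

  # Illustrative sketch only -- the actual CreateDataSet in the training file may differ.
  import os

  import cv2

  def CreateDataSet(class_id, class_name, dataset_dir, num_images=200):
      """Capture webcam frames into dataset_dir/class_name for one gesture class."""
      out_dir = os.path.join(dataset_dir, class_name)
      os.makedirs(out_dir, exist_ok=True)
      cap = cv2.VideoCapture(0)
      count = 0
      while count < num_images:
          ok, frame = cap.read()
          if not ok:
              break
          cv2.imshow("Collecting " + class_name, frame)
          # class_id is assumed here to be the numeric label for the class
          cv2.imwrite(os.path.join(out_dir, f"{class_id}_{class_name}_{count}.jpg"), frame)
          count += 1
          if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop early
              break
      cap.release()
      cv2.destroyAllWindows()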

venv setup for control. Note that you need an ESP32 and the wheelchair itself to actually run it:

  python --version
  python -m venv nama_venv
  nama_venv\Scripts\activate
  pip install mediapipe
  pip install opencv-python
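
Once set up, the control loop broadly follows this pattern: read a camera frame, detect the hand with MediaPipe, predict the gesture class, and forward the command to the ESP32. The sketch below is illustrative only; the model filename, label order, serial port, and image preprocessing are assumptions, and it additionally needs tensorflow and pyserial, which are not in the install list above.

  # Illustrative control-loop sketch; the repository's actual control script may differ.
  import cv2
  import mediapipe as mp
  import numpy as np
  import serial  # pyserial, assumed for the ESP32 link
  from tensorflow.keras.models import load_model

  LABELS = ["Berhenti", "Maju", "Mundur", "TanganKanan", "TanganKiri"]  # assumed order
  model = load_model("model.h5")           # assumed filename
  esp32 = serial.Serial("COM3", 115200)    # assumed port and baud rate

  hands = mp.solutions.hands.Hands(max_num_hands=1)
  cap = cv2.VideoCapture(0)
  while True:
      ok, frame = cap.read()
      if not ok:
          break
      result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
      if result.multi_hand_landmarks:  # only predict when a hand is visible
          img = cv2.resize(frame, (128, 128))[None, ...].astype("float32") / 255.0
          pred = LABELS[int(np.argmax(model.predict(img, verbose=0)))]
          esp32.write((pred + "\n").encode())  # send the gesture class to the ESP32
      cv2.imshow("Control", frame)
      if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
          break
  cap.release()
  cv2.destroyAllWindows()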

Contributing

I am open to contributions and collaboration. If you would like to contribute, please create a pull request or contact me directly!

  • Fork this repo.
  • Create a new feature branch:
      git checkout -b new-feature
  • Commit your changes:
      git commit -m "ver..."
  • Push to the branch:
      git push origin new-feature

Features

  • Optimized hand gestures for controlling the wheelchair.
  • A lightweight and user-friendly system.

LOGO

Authors

Static Badge

Static Badge
