This repository contains a machine learning model trained on the Pima Indians Diabetes dataset, which records medical attributes of female patients. The project aims to predict the likelihood of diabetes from attributes such as glucose level, blood pressure, body mass index (BMI), age, and more. The model applies established machine learning techniques to these features to estimate diabetes risk, making it a useful tool for healthcare professionals and researchers studying diabetes.
- Project Overview
- Dataset Description
- Installation Instructions
- Model and Techniques
- Results and Performance
- Improvements
- Usage Instructions
- Future Work
This project aims to build a predictive model for diabetes using the Pima Indians Diabetes dataset. The model helps in identifying individuals at risk of diabetes based on medical attributes such as age, BMI, insulin levels, and more.
The Pima Indians Diabetes dataset is sourced from the National Institute of Diabetes and Digestive and Kidney Diseases. It includes data on 768 female patients of Pima Indian heritage with the following attributes (a short example of loading and inspecting the data follows the list):
- Pregnancies
- Glucose
- Blood Pressure
- Skin Thickness
- Insulin
- BMI
- Diabetes Pedigree Function
- Age
- Outcome (0 or 1, indicating the absence or presence of diabetes)
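As a quick orientation, the sketch below loads the dataset with pandas and checks the class balance. The file name `diabetes.csv` and the exact column names are assumptions about how the data is stored locally, not something fixed by this repository.

```python
import pandas as pd

# Assumed local file name; adjust to wherever the dataset is stored.
df = pd.read_csv("diabetes.csv")

print(df.shape)                       # expected: (768, 9)
print(df.columns.tolist())            # the eight attributes plus Outcome
print(df["Outcome"].value_counts())   # 0 = no diabetes, 1 = diabetes
```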
To run this project, ensure you have Python installed along with Jupyter Notebook. You'll also need the following libraries:
- pandas
- numpy
- scikit-learn
- matplotlib
- seaborn
- imbalanced-learn (imported as imblearn)
To install these libraries, you can use pip:
pip install pandas numpy scikit-learn matplotlib seaborn imbalanced-learn
This project utilizes a Naive Bayes classifier to predict diabetes. Key steps, sketched in the example after this list, include:
- Data preprocessing: handling missing values, detecting outliers, normalizing and balancing the data, selecting the most significant features, and splitting the data into training and testing sets.
- Model training: Using a Naive Bayes classifier to train the model on the training set.
- Model evaluation: Assessing the model's performance using accuracy, precision, recall, F1-score, classification report, confusion matrix, and AUC score.
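The following is a minimal sketch of this workflow with scikit-learn's GaussianNB. The variable names, the use of SMOTE from imbalanced-learn for balancing, and standard scaling are assumptions for illustration, not necessarily the exact choices made in the notebook.

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix, roc_auc_score)
from imblearn.over_sampling import SMOTE

# X holds the eight medical attributes, y the Outcome column (df from the example above).
X = df.drop(columns="Outcome")
y = df["Outcome"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Normalize features, then balance the training set only (SMOTE assumed here).
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
X_train, y_train = SMOTE(random_state=42).fit_resample(X_train, y_train)

# Train the Naive Bayes classifier and evaluate on the held-out test set.
model = GaussianNB().fit(X_train, y_train)
y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]

print("Accuracy:", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print("AUC:", roc_auc_score(y_test, y_prob))
```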
The model achieved an accuracy of 78% on the test set. Key performance metrics include:
- Precision: 0.63
- Recall: 0.56
- F1-score: 0.59
- AUC score: 0.71
Detailed performance metrics and visualizations are available in the results section of the repository.
Several improvements have been made in this project to enhance data preprocessing and model performance:
- KNN Imputation for Missing Values.
- PCA Method for Feature Selection.
This update adds KNN imputation as a new method for handling missing values. It makes the preprocessing step more robust by estimating each missing value from the most similar records (nearest neighbors). The benefits of KNN imputation (see the sketch after this list) are:
- Preservation of Relationships: KNN considers the similarity of data points, which helps maintain the relationships between features when imputing missing values.
- Improved Model Performance: Proper handling of missing data can lead to enhanced model accuracy and reliability.
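A minimal sketch of this step with scikit-learn's KNNImputer is shown below. The choice of five neighbors, and the assumption that zeros in certain columns stand in for missing values, are illustrative rather than taken from the notebook.

```python
import numpy as np
from sklearn.impute import KNNImputer

# df: the DataFrame loaded in the dataset example above.
# In this dataset, zeros in these columns typically stand in for missing
# values (an assumption about the preprocessing); mark them as NaN first.
cols = ["Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI"]
df[cols] = df[cols].replace(0, np.nan)

# Impute each missing value from its 5 nearest neighbors (n_neighbors assumed).
imputer = KNNImputer(n_neighbors=5)
df[cols] = imputer.fit_transform(df[cols])
```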
A key aspect of this analysis is feature selection; in this update, Principal Component Analysis (PCA) is employed to reduce the number of variables while retaining the most relevant information.
Principal Component Analysis (PCA) is a dimensionality reduction technique. It transforms the original variables into a new set of uncorrelated variables called principal components, ordered by the amount of variance they capture from the data. By keeping only the leading components, we can discard directions that carry little information, which enhances model interpretability and can improve performance.
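A short sketch of applying PCA with scikit-learn follows. The 95% retained-variance threshold is an assumption, and the sketch reuses the scaled feature matrices from the training example above; PCA should be fit on the training data only.

```python
from sklearn.decomposition import PCA

# X_train / X_test: the scaled feature matrices from the training sketch above.
# Keep enough components to explain 95% of the variance (threshold assumed).
pca = PCA(n_components=0.95)
X_train_pca = pca.fit_transform(X_train)
X_test_pca = pca.transform(X_test)

print("Components kept:", pca.n_components_)
print("Variance explained per component:", pca.explained_variance_ratio_)
```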
To use this project, run the following commands in your terminal or command prompt:
- Clone the repository and move into its directory:
git clone https://github.com/Ehsan-Behzadi/A-Machine-Learning-Approach-Using-the-Pima-Indians-Diabetes-Dataset.git
cd A-Machine-Learning-Approach-Using-the-Pima-Indians-Diabetes-Dataset
Next, open the project's notebook file (with the .ipynb extension) in Jupyter Notebook.
- To start Jupyter Notebook:
jupyter notebook
Future improvements and directions for this project include:
- Exploring other classification algorithms, such as Random Forest, K-Nearest Neighbors, and others.
- Hyperparameter tuning to optimize model performance.
- Incorporating additional features to enhance prediction accuracy.