This project presents a Python implementation of the gradient descent algorithm for multi-linear regression. Designed to handle problems with 'r' predictors, it allows customization of the learning rate (η) and the number of iteration steps. The implementation is tested on two datasets, `advertising.csv` and `auto.csv`, using a suitable train-test split to evaluate the model's performance.
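To make the idea concrete, the core update the project implements can be sketched as below. This is a minimal NumPy version of batch gradient descent for multi-linear regression under assumed names (`eta`, `n_iters`), not the project's exact code.

```python
import numpy as np

def gradient_descent(X, y, eta=0.01, n_iters=1000):
    """Minimal batch gradient descent for multi-linear regression.

    X : (n_samples, r) matrix of predictors
    y : (n_samples,) vector of targets
    Returns the fitted coefficients (intercept first).
    """
    # Prepend a column of ones so the first coefficient acts as the intercept.
    Xb = np.c_[np.ones(len(X)), X]
    beta = np.zeros(Xb.shape[1])
    n = len(y)
    for _ in range(n_iters):
        residuals = Xb @ beta - y
        grad = (2 / n) * Xb.T @ residuals  # gradient of the MSE cost
        beta -= eta * grad                 # step against the gradient
    return beta
```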
- Implements gradient descent for multi-linear regression problems.
- Customizable learning rate and iteration steps.
- Evaluation using the cost function and the R-squared metric (see the sketch after this list).
- Tested on real-world datasets (`advertising.csv` and `auto.csv`).
- Visualization tools for analyzing regression results.
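As a rough sketch of how the evaluation metrics listed above are typically computed (the project's own code may organize this differently), the MSE cost and the R-squared score can be written as:

```python
import numpy as np

def mse_cost(y_true, y_pred):
    # Mean squared error, used as the cost function during training.
    return np.mean((y_true - y_pred) ** 2)

def r_squared(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1 - ss_res / ss_tot
```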
- Python environment
- Libraries: NumPy, Matplotlib, Seaborn (optional), Scikit-learn.
Clone the repository to your local machine:
git clone [repository-url]
- Navigate to the project directory.
- Open the provided Jupyter notebooks (`main_advertising.ipynb` and `main_auto.ipynb`) to see the implementation on the respective datasets.
- Modify the parameters (learning rate, iterations, test size) in the `Model` class instantiation as needed (see the sketch after this list).
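For example, an instantiation might look like the following. The keyword names (`learning_rate`, `iterations`, `test_size`) are assumptions based on the parameters described above, so check the class signature in the notebooks.

```python
# Hypothetical instantiation; parameter names are assumptions,
# not necessarily the exact signature of the project's Model class.
model = Model(learning_rate=0.01,  # eta, the gradient descent step size
              iterations=1000,     # number of gradient descent steps
              test_size=0.2)       # fraction of the data held out for testing
```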
Two Jupyter notebooks are provided:
- `advertising_analysis.ipynb` for the `advertising.csv` dataset.
- `auto_analysis.ipynb` for the `auto.csv` dataset.
These notebooks guide you through the process of loading the data, creating an instance of the `Model` class, running the regression analysis, and visualizing the results.
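The overall workflow in the notebooks roughly corresponds to the sketch below. The column handling, the method names (`fit`, `predict`), and the plotting step are assumptions for illustration and may differ from the actual `Model` API.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load one of the datasets (target column assumed to be the last one).
data = pd.read_csv("advertising.csv")
X = data.drop(columns=data.columns[-1]).values  # predictors
y = data[data.columns[-1]].values               # target

# Create and train the model; parameter and method names are hypothetical.
model = Model(learning_rate=0.01, iterations=1000, test_size=0.2)
model.fit(X, y)

# Visualize predicted vs. actual values (here on the full dataset for simplicity).
y_pred = model.predict(X)
plt.scatter(y, y_pred, alpha=0.6)
plt.xlabel("Actual")
plt.ylabel("Predicted")
plt.title("Multi-linear regression: predicted vs. actual")
plt.show()
```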