This repository is a collection of Deep Learning models implemented with knowledge from the DeepLearning.ai Specialization on Coursera.
Implemented using Numpy vectorization.
Xavier initialization for network weights.
Activation Functions:
- Hidden layers: Leaky ReLU
- Output layer: Sigmoid
Includes backpropagation using Gradient Descent or Adam optimization; the latter uses mini-batches.
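The building blocks above can be sketched in a few lines of NumPy. This is a minimal illustration of Xavier initialization, Leaky ReLU, and a sigmoid output, not the repository's actual API (the function names here are placeholders):

```python
import numpy as np

def xavier_init(n_in, n_out, rng=np.random.default_rng(0)):
    # Xavier/Glorot initialization: uniform bounds scaled by fan-in + fan-out
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_out, n_in))

def leaky_relu(z, alpha=0.01):
    # Leaky ReLU keeps a small slope alpha for negative inputs
    return np.where(z > 0, z, alpha * z)

def sigmoid(z):
    # Sigmoid squashes the output layer's pre-activations into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))
```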
Implemented with TensorFlow 1
Xavier initialization for network weights, Adam optimization, and mini-batch training.
Activation Functions:
- Hidden layers: Mish
- Output layer: Softmax
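For reference, the two activations above can be written directly in NumPy: Mish is x · tanh(softplus(x)), and softmax is shown with the usual max-subtraction for numerical stability. This is an illustrative sketch, not code from the repository:

```python
import numpy as np

def mish(z):
    # Mish: z * tanh(softplus(z)); logaddexp(0, z) is a stable softplus
    return z * np.tanh(np.logaddexp(0.0, z))

def softmax(z):
    # Subtract the row max before exponentiating to avoid overflow
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```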
Implemented with Keras 2
Convolutional Layer:
- Zero Padding
- Batch Normalization
- ReLU Activation
- Max Pooling
Dense layer:
- Flatten
- Softmax
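Two of the layer operations listed above, zero padding and max pooling, are easy to sketch in plain NumPy (the real model uses Keras layers; this is only to show what those operations compute):

```python
import numpy as np

def zero_pad(x, pad):
    # Pad the height and width of a (batch, h, w, channels) tensor with zeros
    return np.pad(x, ((0, 0), (pad, pad), (pad, pad), (0, 0)))

def max_pool(x, size=2, stride=2):
    # Slide a size x size window and keep the maximum in each region
    n, h, w, c = x.shape
    oh, ow = (h - size) // stride + 1, (w - size) // stride + 1
    out = np.zeros((n, oh, ow, c))
    for i in range(oh):
        for j in range(ow):
            hs, ws = i * stride, j * stride
            out[:, i, j, :] = x[:, hs:hs + size, ws:ws + size, :].max(axis=(1, 2))
    return out
```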
Reinforcement Learning heavily based on lufficc/dqn.
Intended to be used as an agent on OpenAI Gym.
The constructor of each model receives:
model ( layers_dimension, num_features, num_classes, learning_rate, num_iterations, beta1, beta2)
- layers_dimension is a list with the number of units in each layer, e.g. [784, 800, 300, 10] is the dimension of a NN with one input layer, two hidden layers, and one output layer.
- beta1 and beta2 are the Adam optimizer's exponential-decay parameters.
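To show where beta1 and beta2 come into play, here is a single Adam update step in NumPy (a textbook sketch of the algorithm, not the repository's implementation):

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its square
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction compensates for the zero-initialized moments
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Parameter update scaled per-coordinate by the second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```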
The main public methods of the models are:
fit ( training_inputs, training_labels, optimizer )
- Trains the Neural Network and saves its parameters in a .npy file
predict ( test_inputs )
- Classifies new data
get_accuracy ( test_inputs, test_labels, type )
- Gets the accuracy of predictions made on a labeled dataset
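The accuracy computation behind get_accuracy amounts to comparing argmax predictions against the labels; a minimal NumPy sketch (illustrative, not the repository's code):

```python
import numpy as np

def accuracy(probabilities, labels):
    # probabilities: (num_examples, num_classes); labels: (num_examples,) class indices
    predictions = probabilities.argmax(axis=1)
    # Fraction of examples where the predicted class matches the label
    return (predictions == labels).mean()
```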
The included utility functions can fetch datasets from OpenML. The default dataset is MNIST, a large database of handwritten digits containing 70,000 images.
Global requirements are Python 3.6+ and Virtualenv.
Configure the system by executing the following commands in the project's root folder:
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt
Once inside the virtual environment, the code can be run with python main.py