KLD_exercise

A simple exercise to implement forward and reverse KL-divergence minimization.

Here the target distribution P(X) is a mixture of Gaussians, and the approximating distribution Q(X) is a single Gaussian.

The experiment is implemented in 1D so that it is easy to understand and visualize.
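
The setup can be written down in a few lines of NumPy/SciPy. This is a minimal sketch for orientation, not the repository's code; the mixture weights, means, and standard deviations are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Target P(X): a 1D mixture of two Gaussians.
# These weights, means, and standard deviations are assumed values
# chosen for illustration; the repository may use different ones.
weights = np.array([0.5, 0.5])
means = np.array([-2.0, 2.0])
stds = np.array([0.5, 0.8])

def p_pdf(x):
    """Density of the mixture-of-Gaussians target P(X)."""
    return sum(w * stats.norm.pdf(x, m, s)
               for w, m, s in zip(weights, means, stds))

def q_pdf(x, mu, sigma):
    """Density of the single-Gaussian approximation Q(X)."""
    return stats.norm.pdf(x, mu, sigma)
```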

Intended learning outcomes:

  1. Learn how to compute the KL divergence (used, for example, in Variational Autoencoders (VAEs)).
  2. Understand what minimizing the forward and reverse KL divergence does in practice (see the sketch after this list).
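
For the second outcome: minimizing the forward KL, KL(P||Q) = E_P[log P(X) - log Q(X)], is mass-covering (Q stretches to put probability wherever P does), while minimizing the reverse KL, KL(Q||P) = E_Q[log Q(X) - log P(X)], is mode-seeking (Q tends to lock onto a single mode of P). The sketch below demonstrates this by integrating each divergence on a dense 1D grid and minimizing over Q's parameters with SciPy; the grid range, optimizer, and starting point are assumptions, not the repository's method.

```python
import numpy as np
from scipy import stats, optimize

# Illustrative 1D target P(X): the same assumed two-component
# Gaussian mixture as in the sketch above, evaluated on a grid.
xs = np.linspace(-8.0, 8.0, 2001)
dx = xs[1] - xs[0]
p = 0.5 * stats.norm.pdf(xs, -2.0, 0.5) + 0.5 * stats.norm.pdf(xs, 2.0, 0.8)

def q(params):
    """Gaussian approximation Q(X), parameterized by (mu, log sigma)."""
    mu, log_sigma = params
    return stats.norm.pdf(xs, mu, np.exp(log_sigma))

def forward_kl(params):
    """KL(P||Q) = E_P[log P(X) - log Q(X)], integrated on the grid."""
    return np.sum(p * (np.log(p + 1e-300) - np.log(q(params) + 1e-300))) * dx

def reverse_kl(params):
    """KL(Q||P) = E_Q[log Q(X) - log P(X)], integrated on the grid."""
    qx = q(params)
    return np.sum(qx * (np.log(qx + 1e-300) - np.log(p + 1e-300))) * dx

# Slightly off-center start so the reverse-KL fit can commit to one mode.
init = np.array([0.5, 0.0])
fwd = optimize.minimize(forward_kl, init, method="Nelder-Mead").x
rev = optimize.minimize(reverse_kl, init, method="Nelder-Mead").x
print("forward KL (mass-covering): mu=%.2f sigma=%.2f" % (fwd[0], np.exp(fwd[1])))
print("reverse KL (mode-seeking):  mu=%.2f sigma=%.2f" % (rev[0], np.exp(rev[1])))
```

With these illustrative parameters, the forward-KL fit should land near the mixture mean with a broad standard deviation covering both components, while the reverse-KL fit should settle tightly on one of the two modes.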
