A simple exercise implementing Forward and Reverse KL-Divergence minimization.
The target distribution P(X) is a mixture of Gaussians, and the approximating distribution Q(X) is a single Gaussian.
The experiment is implemented in 1D so that it is easy to understand and visualize.
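
For reference, the two objectives being minimized are the standard KL-Divergence integrals:

$$\mathrm{KL}(P \,\|\, Q) = \int p(x) \log \frac{p(x)}{q(x)} \, dx \qquad \text{(Forward KL)}$$

$$\mathrm{KL}(Q \,\|\, P) = \int q(x) \log \frac{q(x)}{p(x)} \, dx \qquad \text{(Reverse KL)}$$

Minimizing the Forward KL tends to produce a mean-seeking Q that spreads out to cover all modes of P, while minimizing the Reverse KL tends to be mode-seeking, concentrating Q on a single mode.
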
Intended learning outcomes:
- Learn how to compute the KL-Divergence (used, for example, in Variational AutoEncoders (VAEs)).
- Understand what minimizing the Forward and Reverse KL-Divergence does in practice (see the sketch after this list).
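
Below is a minimal, self-contained sketch of the experiment (an illustration, not this repo's actual code; the mixture weights, means, and standard deviations are arbitrary assumptions). It fits a single Gaussian Q to a two-mode mixture P on a dense 1D grid by numerically integrating each KL objective and minimizing it with SciPy:

```python
# Minimal sketch: fit a 1D Gaussian Q(x) to a Gaussian-mixture target P(x)
# by minimizing Forward KL(P||Q) or Reverse KL(Q||P) on a quadrature grid.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

x = np.linspace(-10.0, 10.0, 2001)   # 1D grid for numerical integration
dx = x[1] - x[0]
EPS = 1e-12                          # avoids log(0) in the tails

# Target P(x): two-component mixture (weights/means/stds are assumptions).
p = 0.5 * norm.pdf(x, loc=-2.0, scale=0.8) + 0.5 * norm.pdf(x, loc=2.0, scale=0.8)

def q_pdf(params):
    """Gaussian Q(x) parameterized by mean and log-std (for positivity)."""
    mu, log_sigma = params
    return norm.pdf(x, loc=mu, scale=np.exp(log_sigma))

def forward_kl(params):
    # KL(P || Q) = integral of p(x) * log(p(x) / q(x)) dx  ("mean-seeking")
    q = q_pdf(params)
    return np.sum(p * (np.log(p + EPS) - np.log(q + EPS))) * dx

def reverse_kl(params):
    # KL(Q || P) = integral of q(x) * log(q(x) / p(x)) dx  ("mode-seeking")
    q = q_pdf(params)
    return np.sum(q * (np.log(q + EPS) - np.log(p + EPS))) * dx

init = np.array([0.5, 0.0])          # mu = 0.5, sigma = 1 (slightly asymmetric start)
fwd = minimize(forward_kl, init, method="Nelder-Mead")
rev = minimize(reverse_kl, init, method="Nelder-Mead")
print("Forward KL fit: mu=%.2f sigma=%.2f" % (fwd.x[0], np.exp(fwd.x[1])))
print("Reverse KL fit: mu=%.2f sigma=%.2f" % (rev.x[0], np.exp(rev.x[1])))
```

With this asymmetric initialization, the forward-KL fit typically straddles both modes with a large variance, while the reverse-KL fit locks onto a single mode; this contrast is exactly the behavior the exercise is meant to visualize.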