This module provides functions for performing Expectation-Maximization with offline, online, and step-wise algorithms.
template<uint32 N, uint32 NCOMPONENTS>
CUGAR_HOST_DEVICE void cugar::EM (Mixture< Gaussian_distribution_2d, NCOMPONENTS > &mixture, const float eta, const Vector2f *x, const float *w)

template<uint32 NCOMPONENTS>
CUGAR_HOST_DEVICE void cugar::joint_entropy_EM (Mixture< Gaussian_distribution_2d, NCOMPONENTS > &mixture, const float eta, const Vector2f x, const float w=1.0f)

template<uint32 NCOMPONENTS>
CUGAR_HOST_DEVICE void cugar::joint_entropy_EM (Mixture< Gaussian_distribution_2d, NCOMPONENTS > &mixture, const float eta, const uint32 N, const Vector2f *x, const float *w)

template<uint32 NCOMPONENTS>
CUGAR_HOST_DEVICE void cugar::stepwise_E (Mixture< Gaussian_distribution_2d, NCOMPONENTS > &mixture, const float eta, const Vector2f x, const float w, Matrix< float, NCOMPONENTS, 8 > &u)

template<uint32 NCOMPONENTS>
CUGAR_HOST_DEVICE void cugar::stepwise_M (Mixture< Gaussian_distribution_2d, NCOMPONENTS > &mixture, const Matrix< float, NCOMPONENTS, 8 > &u, const uint32 N)
|
◆ EM()
template<uint32 N, uint32 NCOMPONENTS>
Traditional Expectation-Maximization for a statically-sized batch. The algorithm can also be called repeatedly on fixed-size mini-batches to refine the same mixture in step-wise fashion.
Template Parameters:
    N            batch size
    NCOMPONENTS  number of mixture components

Parameters:
    mixture  input/output mixture to be learned
    eta      learning factor, typically in the range [1,2)
    x        samples
    w        sample weights
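As an illustration of what one call does, here is a minimal, self-contained 1d analogue of this batch iteration (a sketch only: `Gaussian1d`, `gauss_pdf` and `em_iteration` are hypothetical names, not part of CUGAR; the real function operates on `Mixture< Gaussian_distribution_2d, NCOMPONENTS >` in 2d):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical 1d stand-in for a 2-component Gaussian mixture.
struct Gaussian1d { float mean, var, pi; };

inline float gauss_pdf(float x, float mean, float var)
{
    const float d = x - mean;
    return std::exp(-0.5f * d * d / var) / std::sqrt(2.0f * 3.14159265f * var);
}

// One batch E+M iteration with per-sample weights w, mirroring the role of
// cugar::EM. Calling it repeatedly refines the same mixture, as the docs
// describe for step-wise use on mini-batches.
inline void em_iteration(Gaussian1d g[2], const std::vector<float>& x, const std::vector<float>& w)
{
    float r_sum[2] = {0, 0}, rx[2] = {0, 0}, rxx[2] = {0, 0};
    for (std::size_t s = 0; s < x.size(); ++s)
    {
        // E-step: weighted responsibilities of each component for sample s
        float r[2], tot = 0;
        for (int i = 0; i < 2; ++i) { r[i] = g[i].pi * gauss_pdf(x[s], g[i].mean, g[i].var); tot += r[i]; }
        for (int i = 0; i < 2; ++i)
        {
            r[i] = w[s] * r[i] / tot;
            r_sum[i] += r[i]; rx[i] += r[i] * x[s]; rxx[i] += r[i] * x[s] * x[s];
        }
    }
    // M-step: re-estimate prior, mean and variance from the sufficient statistics
    const float wtot = r_sum[0] + r_sum[1];
    for (int i = 0; i < 2; ++i)
    {
        g[i].pi   = r_sum[i] / wtot;
        g[i].mean = rx[i] / r_sum[i];
        g[i].var  = std::max(rxx[i] / r_sum[i] - g[i].mean * g[i].mean, 1e-4f); // floor avoids collapse
    }
}
```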
◆ joint_entropy_EM() [1/2]
template<uint32 NCOMPONENTS>
Online joint-entropy Expectation Maximization, as described in:
Batch and On-line Parameter Estimation of Gaussian Mixtures Based on the Joint Entropy, Yoram Singer, Manfred K. Warmuth
and further extended to support importance sampling weights w.
Template Parameters:
    NCOMPONENTS  number of mixture components

Parameters:
    mixture  input/output mixture to be learned
    eta      learning factor: can start in the range [1,2) and follow a decaying schedule
    x        sample
    w        sample weight
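The precise joint-entropy update is given in the Singer-Warmuth paper; the sketch below only illustrates the general single-sample online pattern this function belongs to (a plain stochastic EM update in 1d, not the joint-entropy rule itself; all names are hypothetical and none exist in CUGAR):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Hypothetical 1d stand-in for a 2-component mixture.
struct Component1d { float mean, var, pi; };

inline float pdf1d(float x, float mean, float var)
{
    const float d = x - mean;
    return std::exp(-0.5f * d * d / var) / std::sqrt(2.0f * 3.14159265f * var);
}

// One online update for a single weighted sample: compute responsibilities,
// then move each component towards the sample by a step proportional to the
// learning factor eta (large early on, decaying over time).
inline void online_em_update(Component1d c[2], float x, float w, float eta)
{
    float r[2], tot = 0;
    for (int i = 0; i < 2; ++i) { r[i] = c[i].pi * pdf1d(x, c[i].mean, c[i].var); tot += r[i]; }
    for (int i = 0; i < 2; ++i)
    {
        r[i] = w * r[i] / tot;                  // importance-weighted responsibility
        const float step = eta * r[i];
        const float d    = x - c[i].mean;
        c[i].mean += step * d;                                           // pull mean towards x
        c[i].var   = std::max((1 - step) * c[i].var + step * d * d, 1e-4f);
        c[i].pi    = (1 - eta * w) * c[i].pi + eta * r[i];               // blend prior towards r
    }
    const float psum = c[0].pi + c[1].pi;
    c[0].pi /= psum; c[1].pi /= psum;           // keep priors normalized
}
```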
◆ joint_entropy_EM() [2/2]
template<uint32 NCOMPONENTS>
Online step-wise joint-entropy Expectation Maximization, as described in:
Batch and On-line Parameter Estimation of Gaussian Mixtures Based on the Joint Entropy, Yoram Singer, Manfred K. Warmuth
and further extended to support importance sampling weights w.
Template Parameters:
    NCOMPONENTS  number of mixture components

Parameters:
    mixture  input/output mixture to be learned
    eta      learning factor, typically in the range [1,2)
    N        number of samples in the batch
    x        samples
    w        sample weights
◆ stepwise_E()
template<uint32 NCOMPONENTS>
Online step-wise Expectation-Maximization E-Step, as described in:
On-line Learning of Parametric Mixture Models for Light Transport Simulation, Vorba et al.
Template Parameters:
    NCOMPONENTS  number of mixture components

Parameters:
    mixture  input/output mixture to be learned
    eta      learning factor: must follow a schedule of the form i^-alpha, where i is the sample index and alpha is in [0.6,0.9]
    x        sample
    w        sample weight
    u        sufficient statistics, accumulated across calls
◆ stepwise_M()
template<uint32 NCOMPONENTS>
Online step-wise Expectation-Maximization M-Step, as described in:
On-line Learning of Parametric Mixture Models for Light Transport Simulation, Vorba et al.
Template Parameters:
    NCOMPONENTS  number of mixture components

Parameters:
    mixture  input/output mixture to be learned
    u        sufficient statistics accumulated by stepwise_E
    N        number of samples accumulated into the sufficient statistics
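The two functions cooperate: stepwise_E blends each sample's statistics into the accumulator u under the decaying eta schedule, and stepwise_M rebuilds the mixture from u. A minimal, self-contained 1d sketch of this split (hypothetical names throughout; the 3-entry `Stats` layout is a 1d simplification of the 8 floats per component that the CUGAR `Matrix< float, NCOMPONENTS, 8 >` accumulator stores for the 2d case):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Comp1d { float mean, var, pi; };
struct Stats  { float r, rx, rxx; };   // responsibility mass and raw moments

inline float npdf(float x, float mean, float var)
{
    const float d = x - mean;
    return std::exp(-0.5f * d * d / var) / std::sqrt(2.0f * 3.14159265f * var);
}

// E-step: blend this sample's weighted statistics into the running average u
// with weight eta ~ i^-alpha (step-wise EM in the Cappe-Moulines sense).
inline void stepwise_E_1d(const Comp1d c[2], float x, float w, float eta, Stats u[2])
{
    float r[2], tot = 0;
    for (int i = 0; i < 2; ++i) { r[i] = c[i].pi * npdf(x, c[i].mean, c[i].var); tot += r[i]; }
    for (int i = 0; i < 2; ++i)
    {
        const float ri = w * r[i] / tot;
        u[i].r   = (1 - eta) * u[i].r   + eta * ri;
        u[i].rx  = (1 - eta) * u[i].rx  + eta * ri * x;
        u[i].rxx = (1 - eta) * u[i].rxx + eta * ri * x * x;
    }
}

// M-step: re-estimate the mixture from the accumulated statistics.
inline void stepwise_M_1d(Comp1d c[2], const Stats u[2])
{
    const float rtot = u[0].r + u[1].r;
    for (int i = 0; i < 2; ++i)
    {
        c[i].pi   = u[i].r / rtot;
        c[i].mean = u[i].rx / u[i].r;
        c[i].var  = std::max(u[i].rxx / u[i].r - c[i].mean * c[i].mean, 1e-4f);
    }
}
```

In this sketch the accumulator is seeded from the initial mixture and the M-step runs after every E-step; the CUGAR API leaves that cadence (and the normalization by N) to the caller.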