Expectation-Maximization

Detailed Description

This module provides functions for performing Expectation-Maximization with offline, online and step-wise algorithms.

Functions

template<uint32 N, uint32 NCOMPONENTS>
CUGAR_HOST_DEVICE void cugar::EM (Mixture< Gaussian_distribution_2d, NCOMPONENTS > &mixture, const float eta, const Vector2f *x, const float *w)
 
template<uint32 NCOMPONENTS>
CUGAR_HOST_DEVICE void cugar::joint_entropy_EM (Mixture< Gaussian_distribution_2d, NCOMPONENTS > &mixture, const float eta, const Vector2f x, const float w=1.0f)
 
template<uint32 NCOMPONENTS>
CUGAR_HOST_DEVICE void cugar::joint_entropy_EM (Mixture< Gaussian_distribution_2d, NCOMPONENTS > &mixture, const float eta, const uint32 N, const Vector2f *x, const float *w)
 
template<uint32 NCOMPONENTS>
CUGAR_HOST_DEVICE void cugar::stepwise_E (Mixture< Gaussian_distribution_2d, NCOMPONENTS > &mixture, const float eta, const Vector2f x, const float w, Matrix< float, NCOMPONENTS, 8 > &u)
 
template<uint32 NCOMPONENTS>
CUGAR_HOST_DEVICE void cugar::stepwise_M (Mixture< Gaussian_distribution_2d, NCOMPONENTS > &mixture, const Matrix< float, NCOMPONENTS, 8 > &u, const uint32 N)
 

Function Documentation

◆ EM()

template<uint32 N, uint32 NCOMPONENTS>
CUGAR_HOST_DEVICE void cugar::EM ( Mixture< Gaussian_distribution_2d, NCOMPONENTS > &  mixture,
const float  eta,
const Vector2f *  x,
const float *  w 
)

Traditional Expectation-Maximization on a statically sized batch. This algorithm can also be invoked repeatedly on fixed-size mini-batches to refine the same mixture in a step-wise fashion.

Template Parameters
    N            batch size
    NCOMPONENTS  number of mixture components
Parameters
    mixture  input/output mixture to be learned
    eta      learning factor, typically in the range [1,2)
    x        samples
    w        sample weights
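
A minimal host-side usage sketch follows. The header path, the cugar:: qualification of uint32 and of the mixture types, and the mixture initialization are assumptions, not part of the documented interface:

    // illustrative sketch only: header path is an assumption
    #include <cugar/sampling/em.h>

    // refine a K-component 2d Gaussian mixture with one EM pass over a
    // fixed-size mini-batch; calling this repeatedly on successive
    // mini-batches refines the same mixture step-wise
    template <cugar::uint32 K>
    void em_minibatch(
        cugar::Mixture<cugar::Gaussian_distribution_2d, K>& mixture, // assumed pre-initialized
        const cugar::Vector2f*                               x,       // BATCH_SIZE samples
        const float*                                         w)       // BATCH_SIZE sample weights
    {
        const cugar::uint32 BATCH_SIZE = 64u;  // compile-time batch size N
        const float         eta        = 1.5f; // learning factor in [1,2)

        cugar::EM<BATCH_SIZE, K>(mixture, eta, x, w);
    }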

◆ joint_entropy_EM() [1/2]

template<uint32 NCOMPONENTS>
CUGAR_HOST_DEVICE void cugar::joint_entropy_EM ( Mixture< Gaussian_distribution_2d, NCOMPONENTS > &  mixture,
const float  eta,
const Vector2f  x,
const float  w = 1.0f 
)

Online joint-entropy Expectation Maximization, as described in:

Batch and On-line Parameter Estimation of Gaussian Mixtures Based on the Joint Entropy, Yoram Singer, Manfred K. Warmuth

and further extended to support importance sampling weights w.

Template Parameters
    NCOMPONENTS  number of mixture components
Parameters
    mixture  input/output mixture to be learned
    eta      learning factor: can start in the range [1,2) and follow a decaying schedule
    x        sample
    w        sample weight
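
A minimal online-update sketch follows; the header path and the particular decay schedule for eta are assumptions (any schedule starting in [1,2) and decaying towards 1 fits the description above):

    // illustrative sketch only: header path is an assumption
    #include <cugar/sampling/em.h>

    // feed a stream of weighted samples to the online estimator,
    // one sample at a time, with a decaying learning factor
    template <cugar::uint32 K>
    void online_joint_entropy_EM(
        cugar::Mixture<cugar::Gaussian_distribution_2d, K>& mixture,   // assumed pre-initialized
        const cugar::Vector2f*                               x,         // n_samples samples
        const float*                                         w,         // n_samples importance sampling weights
        const cugar::uint32                                  n_samples)
    {
        for (cugar::uint32 i = 0; i < n_samples; ++i)
        {
            // example decay schedule: starts at 1.5 and approaches 1
            const float eta = 1.0f + 1.0f / float(i + 2);

            cugar::joint_entropy_EM(mixture, eta, x[i], w[i]);
        }
    }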

◆ joint_entropy_EM() [2/2]

template<uint32 NCOMPONENTS>
CUGAR_HOST_DEVICE void cugar::joint_entropy_EM ( Mixture< Gaussian_distribution_2d, NCOMPONENTS > &  mixture,
const float  eta,
const uint32  N,
const Vector2f *  x,
const float *  w 
)

Online step-wise joint-entropy Expectation Maximization, as described in:

Batch and On-line Parameter Estimation of Gaussian Mixtures Based on the Joint Entropy, Yoram Singer, Manfred K. Warmuth

and further extended to support importance sampling weights w.

Template Parameters
    NCOMPONENTS  number of mixture components
Parameters
    mixture  input/output mixture to be learned
    eta      learning factor, typically in the range [1,2)
    N        number of samples
    x        samples
    w        sample weights
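
A minimal mini-batch sketch follows, under the same assumptions as above (header path, cugar:: spelling, pre-initialized mixture):

    // illustrative sketch only: header path is an assumption
    #include <cugar/sampling/em.h>

    // one step-wise joint-entropy EM update using a whole mini-batch;
    // unlike cugar::EM, the batch size N is a runtime value here
    template <cugar::uint32 K>
    void joint_entropy_EM_minibatch(
        cugar::Mixture<cugar::Gaussian_distribution_2d, K>& mixture, // assumed pre-initialized
        const cugar::Vector2f*                               x,       // N samples
        const float*                                         w,       // N importance sampling weights
        const cugar::uint32                                  N)
    {
        const float eta = 1.5f; // learning factor in [1,2)

        cugar::joint_entropy_EM(mixture, eta, N, x, w);
    }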

◆ stepwise_E()

template<uint32 NCOMPONENTS>
CUGAR_HOST_DEVICE void cugar::stepwise_E ( Mixture< Gaussian_distribution_2d, NCOMPONENTS > &  mixture,
const float  eta,
const Vector2f  x,
const float  w,
Matrix< float, NCOMPONENTS, 8 > &  u 
)

Online step-wise Expectation-Maximization E-Step, as described in:

On-line Learning of Parametric Mixture Models for Light Transport Simulation, Vorba et al.

Template Parameters
    NCOMPONENTS  number of mixture components
Parameters
    mixture  input/output mixture to be learned
    eta      learning factor: must follow a schedule equal to i^-alpha, where i is the sample index and alpha is in [0.6,0.9]
    x        sample
    w        sample weight
    u        sufficient statistics
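
A minimal E-step sketch follows; the header path, the zero-initialization of the statistics matrix and the 1-based sample indexing are assumptions:

    // illustrative sketch only: header path is an assumption
    #include <cugar/sampling/em.h>
    #include <cmath>

    // fold a run of weighted samples into the sufficient statistics u,
    // using the eta = i^-alpha schedule described above
    template <cugar::uint32 K>
    void accumulate_E_step(
        cugar::Mixture<cugar::Gaussian_distribution_2d, K>& mixture,     // assumed pre-initialized
        cugar::Matrix<float, K, 8>&                          u,           // assumed zeroed before the first call
        const cugar::Vector2f*                               x,           // n_samples samples
        const float*                                         w,           // n_samples sample weights
        const cugar::uint32                                  n_samples,
        const cugar::uint32                                  first_index) // 1-based index of the first sample
    {
        const float alpha = 0.7f; // decay exponent in [0.6,0.9]

        for (cugar::uint32 j = 0; j < n_samples; ++j)
        {
            const float i   = float(first_index + j);
            const float eta = std::pow(i, -alpha); // eta = i^-alpha

            cugar::stepwise_E(mixture, eta, x[j], w[j], u);
        }
    }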

◆ stepwise_M()

template<uint32 NCOMPONENTS>
CUGAR_HOST_DEVICE void cugar::stepwise_M ( Mixture< Gaussian_distribution_2d, NCOMPONENTS > &  mixture,
const Matrix< float, NCOMPONENTS, 8 > &  u,
const uint32  N 
)

Online step-wise Expectation-Maximization M-Step, as described in:

On-line Learning of Parametric Mixture Models for Light Transport Simulation, Vorba et al.

Template Parameters
    NCOMPONENTS  number of mixture components
Parameters
    mixture  input/output mixture to be learned
    u        sufficient statistics accumulated by stepwise_E
    N        number of samples accumulated in the sufficient statistics
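
Putting the two steps together, a minimal step-wise EM sketch follows; the header path, the zero-initialization of u, the M-step period, and the interpretation of N as the number of samples folded into u so far are assumptions:

    // illustrative sketch only: header path is an assumption
    #include <cugar/sampling/em.h>
    #include <cmath>

    // step-wise EM: fold samples into the sufficient statistics with
    // stepwise_E, and periodically refit the mixture with stepwise_M
    template <cugar::uint32 K>
    void stepwise_EM(
        cugar::Mixture<cugar::Gaussian_distribution_2d, K>& mixture,   // assumed pre-initialized
        cugar::Matrix<float, K, 8>&                          u,         // assumed zeroed before the first call
        const cugar::Vector2f*                               x,         // n_samples samples
        const float*                                         w,         // n_samples sample weights
        const cugar::uint32                                  n_samples)
    {
        const float         alpha    = 0.7f; // decay exponent in [0.6,0.9]
        const cugar::uint32 M_PERIOD = 16u;  // how many E-steps between M-steps (a free choice)

        for (cugar::uint32 i = 1; i <= n_samples; ++i)
        {
            const float eta = std::pow(float(i), -alpha); // eta = i^-alpha

            // E-step: fold the i-th sample into the sufficient statistics
            cugar::stepwise_E(mixture, eta, x[i - 1], w[i - 1], u);

            // M-step: refit the mixture from the statistics accumulated so far
            if ((i % M_PERIOD) == 0)
                cugar::stepwise_M(mixture, u, i);
        }
    }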