Show Tag: latent-variable-model


SOMs fail to learn what is interesting in the data if what is not interesting (the noise) explains the data better.

Bishop et al.'s goal in introducing generative topographic mapping was not biological plausibility.

Generative Topographic Mapping produces probability density functions over the latent variables given data points.
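As a rough illustration of what that means, the sketch below computes a posterior over a grid of latent points for one data point, assuming a GTM whose mapping into data space and noise precision have already been fitted. The variable names (projected, beta) and the uniform prior over grid points are assumptions of the sketch, not taken from these notes.

```python
# Minimal sketch: the PDF a GTM assigns over its latent grid for one data point.
import numpy as np

def gtm_posterior(t, projected, beta):
    """Posterior p(z_k | t) over a grid of latent points.

    t         : observed data point, shape (D,)
    projected : images y_k of the latent grid points in data space, shape (K, D)
    beta      : fitted inverse noise variance (scalar)
    """
    # Log of isotropic Gaussian likelihoods N(t | y_k, beta^{-1} I),
    # up to a constant that cancels in the normalisation below.
    sq_dist = np.sum((projected - t) ** 2, axis=1)
    log_lik = -0.5 * beta * sq_dist
    # Uniform prior over grid points; normalise to obtain the posterior.
    log_lik -= log_lik.max()          # numerical stability
    post = np.exp(log_lik)
    return post / post.sum()
```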

SOMs learn latent-variable models.

A latent variable model is a model of the relationship between observables and hidden (latent) variables such that

  • the manifestations of the observed variables are caused by the latent variables, and
  • the values of the observables are conditionally independent of each other given the latent variables (formalised below).
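
In standard notation (not specific to these notes), with observables x = (x_1, ..., x_D) and latent variables z, the two conditions amount to:

```latex
% Conditional independence of the observables given the latents:
p(x \mid z) = \prod_{i=1}^{D} p(x_i \mid z)

% Marginal distribution of the observables (the generative direction):
p(x) = \int p(x \mid z)\, p(z)\, \mathrm{d}z
```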

If the noise in the inputs to my SOM is correlated between input neurons, then the SOM cannot properly learn a latent variable model.

There can be situations where my algorithm is still optimal or near-optimal.

A Deep Belief Network is a multi-layered, feed-forward network in which each successive layer infers the latent variables of its input from the output of the preceding layer.
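A minimal sketch of that bottom-up inference pass, assuming the layers' weights and biases have already been trained layer-wise (e.g. as stacked RBMs); the function names and shapes are illustrative assumptions.

```python
# Sketch of the recognition (bottom-up) pass through a stack of trained
# DBN layers: each layer infers its latent units from the layer below.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dbn_infer(visible, layers):
    """Return the activation of each successive latent layer.

    visible : input vector, shape (D,)
    layers  : list of (W, b) pairs, W of shape (H, D_prev), b of shape (H,)
    """
    activations = []
    h = visible
    for W, b in layers:
        # Each layer treats the output of the previous layer as its
        # observables and infers (the expectation of) its latent units.
        h = sigmoid(W @ h + b)
        activations.append(h)
    return activations
```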

SOMs treat all of their input dimensions as observables of some latent variable. It is possible to give data points an extra dimension containing labels, but these labels will not have a greater effect on learning than the other dimensions of the data point. This is especially true if the labels are not good predictors of the actual latent variable (see the toy sketch below).
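A toy sketch of attaching labels as one extra input dimension: the data, the label encoding, and the deliberately minimal 1-D SOM below are illustrative assumptions, not the setup from these notes.

```python
# The SOM treats the label column like any other observable dimension.
import numpy as np

rng = np.random.default_rng(0)

data = rng.normal(size=(200, 3))                 # 3 observable dimensions
labels = rng.integers(0, 2, size=200)            # scalar labels in {0, 1}
augmented = np.hstack([data, labels[:, None]])   # label becomes dimension 4

n_units, dim = 10, augmented.shape[1]
weights = rng.normal(size=(n_units, dim))
positions = np.arange(n_units)                   # unit positions on the 1-D map

for t, x in enumerate(rng.permutation(augmented)):
    lr = 0.5 * (1.0 - t / len(augmented))                  # decaying learning rate
    sigma = max(1.0, n_units / 2 * (1.0 - t / len(augmented)))
    bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))    # competitive step
    # Gaussian neighbourhood around the best-matching unit.
    h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)
```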

My SOMs learn competitively, but what they encode is not error but latent variables.

Bauer and Wermter present an ANN algorithm which takes from the self-organizing map (SOM) algorithm the ability to learn a latent variable model from its input. They extend the SOM algorithm so it learns about the distribution of noise in the input and computes probability density functions over the latent variables. The algorithm represents these probability density functions using population codes. This is done with very few assumptions about the distribution of noise.
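As a generic illustration of the last point only (reading a probability density out of a population code), and explicitly not Bauer and Wermter's algorithm: if each unit has a preferred latent value, normalised activations can be interpreted as a density over those values. The grid, activation profile, and normalisation below are assumptions of the sketch.

```python
# Generic population-code readout: non-negative unit activations over a grid
# of preferred latent values, normalised so that they integrate to ~1.
import numpy as np

def population_code_to_pdf(activations, preferred_values):
    """Interpret unit activations as an unnormalised density over the units'
    preferred latent values and normalise it."""
    activations = np.clip(activations, 0.0, None)
    spacing = np.gradient(preferred_values)      # approximate grid cell widths
    mass = np.sum(activations * spacing)         # Riemann-sum normaliser
    return activations / mass

preferred = np.linspace(-1.0, 1.0, 21)
acts = np.exp(-(preferred - 0.3) ** 2 / 0.1)     # example activity bump
pdf = population_code_to_pdf(acts, preferred)
```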