Show Tag: competitive-learning


Rucci et al. present an algorithm which performs auditory localization and combines auditory and visual localization in a common SC map. The mapping between the representations is learned using value-dependent learning.

k-means can be seen as a special case of the EM algorithm: the hard-assignment (zero-variance) limit of EM for an isotropic Gaussian mixture model.
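A minimal sketch of that correspondence (toy data and parameters are made up, not from any cited source): the E-step degenerates into assigning each point to its nearest centroid, and the M-step re-estimates each centroid as the mean of its assigned points.

```python
import numpy as np

def kmeans(X, k, n_iter=50, rng=None):
    """k-means written as hard-assignment EM for an isotropic Gaussian mixture."""
    rng = np.random.default_rng(rng)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # "E-step": degenerate responsibilities -- each point belongs fully
        # to its nearest centroid (the zero-variance limit of the GMM posterior)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # "M-step": maximum-likelihood estimate of each mean given the assignments
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# toy usage
X = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 5.0])
centroids, labels = kmeans(X, k=2, rng=0)
```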

"Stochastic competitive learning behaves as a form of adaptive quantization", because the centroids being adapted distribute themselves in the data space such that they minimize the quantization error (according to the distance metric being used).

Regular Hebbian learning leads to all neurons responding to the same input. One method to force neurons to specialize is competitive learning.

Competitive learning can be implemented in ANNs by strong, constant inhibitory connections between competing neurons.
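A minimal sketch of one way this can look (a MAXNET-style winner-take-all stage with constant inhibitory weights, followed by a Hebbian update of the winner; the parameters and the weight normalization are my assumptions):

```python
import numpy as np

def winner_take_all(activations, inhibition=0.2, n_steps=100):
    """Constant mutual inhibition between competing units (MAXNET-style).

    With inhibition < 1/(n_units - 1), iterating the dynamics leaves only the
    most strongly driven unit active."""
    y = activations.copy()
    for _ in range(n_steps):
        y = np.maximum(y - inhibition * (y.sum() - y), 0.0)
        if (y > 0).sum() <= 1:
            break
    return y

def competitive_step(W, x, lr=0.1):
    y = winner_take_all(W @ x)
    winner = np.argmax(y)
    W[winner] += lr * x                         # Hebbian update of the winning unit
    W[winner] /= np.linalg.norm(W[winner])      # normalization keeps weights bounded
    return winner

rng = np.random.default_rng(0)
X = rng.random((500, 10))
W = rng.random((4, 10))
W /= np.linalg.norm(W, axis=1, keepdims=True)
for x in X:
    competitive_step(W, x)
```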

Simple competitive neural learning with constant inhibitory connections between competing neurons leads to grandmother-type cells.

Simple competitive neural learning with constant inhibitory connections between competing neurons produces a code that facilitates further processing.

A network with Hebbian and anti-Hebbian learning can produce a sparse code: the excitatory connections from input to output neurons are learned with a Hebbian rule, while the inhibitory connections between output neurons are learned with an anti-Hebbian rule.
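A rough sketch of such a network, loosely in the spirit of Földiák-type models; the concrete update rules, the relaxation dynamics, and the target firing rate p are my assumptions, not details from the note.

```python
import numpy as np

def settle(x, W, L, n_steps=50, tau=0.2):
    """Relax the output activities under the learned lateral inhibition L."""
    y = np.zeros(W.shape[0])
    for _ in range(n_steps):
        y = (1 - tau) * y + tau * np.maximum(W @ x - L @ y, 0.0)
    return y

def train(X, n_out=16, lr_ff=0.02, lr_lat=0.05, p=0.1, rng=None):
    rng = np.random.default_rng(rng)
    W = rng.random((n_out, X.shape[1])) * 0.1   # excitatory input -> output weights
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    L = np.zeros((n_out, n_out))                # inhibitory output <-> output weights
    for x in X:
        y = settle(x, W, L)
        # Hebbian: strengthen connections from active inputs to active outputs,
        # with renormalization to keep the weights bounded
        W += lr_ff * np.outer(y, x)
        W /= np.linalg.norm(W, axis=1, keepdims=True)
        # anti-Hebbian: outputs that are co-active more often than a target
        # rate p inhibit each other more strongly, which decorrelates the code
        L += lr_lat * (np.outer(y, y) - p ** 2)
        np.fill_diagonal(L, 0.0)
        L = np.clip(L, 0.0, None)
    return W, L

X = np.random.rand(1000, 32)
W, L = train(X)
```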

Pavlou and Casey model the SC.

They use Hebbian, competitive learning to learn a topographic mapping between modalities.

They also simulate cortical input.

Learning in RBMs is competitive, but without explicit inhibition (the "restricted" in RBM means there are no connections within a layer, so there can be no inhibitory connections between hidden units). Hidden neurons nevertheless learn different things because of random weight initialization and stochastic processing.
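For illustration, a minimal CD-1 sketch for a binary RBM (standard textbook form, not taken from the note): no term couples hidden units to each other anywhere, so any differentiation between them comes only from the random initial weights and the stochastic sampling.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_epoch(V, W, b_vis, b_hid, lr=0.05, rng=None):
    """One pass of contrastive-divergence (CD-1) learning over a batch V of binary data."""
    rng = np.random.default_rng(rng)
    # upward pass: hidden units are sampled independently given the data
    # (no lateral connections between hidden units)
    ph0 = sigmoid(V @ W + b_hid)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # one Gibbs step downward and upward again
    pv1 = sigmoid(h0 @ W.T + b_vis)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_hid)
    # update: data-driven correlations minus reconstruction-driven correlations
    W += lr * (V.T @ ph0 - v1.T @ ph1) / len(V)
    b_vis += lr * (V - v1).mean(axis=0)
    b_hid += lr * (ph0 - ph1).mean(axis=0)

rng = np.random.default_rng(0)
V = (rng.random((200, 64)) < 0.3).astype(float)   # toy binary data
W = 0.01 * rng.standard_normal((64, 25))          # random init breaks the symmetry
b_vis, b_hid = np.zeros(64), np.zeros(25)
for _ in range(10):
    cd1_epoch(V, W, b_vis, b_hid, rng=rng)
```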

My SOMs learn competitively. But what they encode is not an error signal but latent variables.
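A minimal generic 1-D SOM sketch (parameters and details assumed; not the implementation referred to above) that makes the point: the update moves the best-matching unit and its map neighbors toward the input, so the trained map positions come to encode latent structure of the data rather than an error signal.

```python
import numpy as np

def train_som(X, n_units=20, lr=0.1, sigma=2.0, n_epochs=20, rng=None):
    """Train a 1-D SOM: the best-matching unit and its map neighbors move toward each input."""
    rng = np.random.default_rng(rng)
    W = X[rng.choice(len(X), size=n_units, replace=False)].copy()
    grid = np.arange(n_units)
    for _ in range(n_epochs):
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))     # competition
            h = np.exp(-(grid - bmu) ** 2 / (2 * sigma ** 2))  # neighborhood on the map
            W += lr * h[:, None] * (x - W)                     # cooperative update
    return W

# after training, a unit's index (its position on the map) reflects a latent
# variable of the data, not a quantization or prediction error
X = np.random.rand(1000, 3)
W = train_som(X, rng=0)
```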