Show Tag: hebbian-learning


Plain Hebbian learning drives all neurons to respond to the same (dominant) input structure. One method to force neurons to specialize is competitive learning.
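The collapse onto a single input structure can be shown in a few lines. The sketch below (illustrative, not from any specific paper) trains two neurons with a plain Hebbian rule plus weight normalization on data with one dominant direction; both weight vectors end up (anti-)parallel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs with one dominant direction plus isotropic noise.
dominant = np.array([1.0, 0.2])
X = rng.normal(size=(500, 2)) * 0.1 + rng.normal(size=(500, 1)) * dominant

# Two neurons with independent random weights.
W = rng.normal(size=(2, 2))
eta = 0.05
for x in X:
    y = W @ x                                      # linear activations
    W += eta * np.outer(y, x)                      # plain Hebbian update
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # normalize to prevent blow-up

# Both weight vectors align with the dominant direction (up to sign),
# so the neurons respond to the same input.
cos = abs(W[0] @ W[1])
```

Without any interaction between the neurons, nothing breaks the symmetry: each one independently converges to the principal input direction.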

Competitive learning can be implemented in ANNs via strong, constant inhibitory connections between competing neurons.

Simple competitive learning with constant inhibitory connections between competing neurons produces grandmother-type cells, each responding to a single input pattern.
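A minimal sketch of this (hard winner-take-all standing in for strong, constant mutual inhibition; the mildly asymmetric initialization is an illustrative trick to avoid dead units, not part of the notes above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two input clusters; competition should assign one unit to each.
centers = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = rng.integers(0, 2, 400)
X = centers[labels] + rng.normal(scale=0.05, size=(400, 2))

# Mildly asymmetric initial weights so each unit can win some inputs.
W = np.array([[0.7, 0.3], [0.3, 0.7]])
eta = 0.1
for x in X:
    winner = np.argmax(W @ x)           # hard competition: only the winner fires
    W[winner] += eta * (x - W[winner])  # only the winner's weights move

# Each unit now responds to exactly one cluster: a grandmother-type cell.
assignments = np.argmax(W @ centers.T, axis=0)
```

Because only the winner learns, each weight vector converges to the mean of one cluster and stops responding to the other.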

A network with Hebbian and anti-Hebbian learning can produce a sparse code. Excitatory connections from input to output are learned with a Hebbian rule, while inhibitory connections between output neurons are learned with an anti-Hebbian rule.
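A minimal linear sketch of this scheme (in the spirit of Földiák-style networks, but simplified: a single inhibition pass instead of settled recurrent dynamics, and an Oja decay term to keep the Hebbian weights bounded):

```python
import numpy as np

rng = np.random.default_rng(2)

# Correlated 2-D inputs with one dominant direction, normalized to unit length.
X = rng.normal(size=(3000, 2)) @ np.array([[1.0, 0.8], [0.0, 0.6]])
X /= np.linalg.norm(X, axis=1, keepdims=True)

W = rng.normal(scale=0.5, size=(2, 2))  # input -> output, learned Hebbian
M = np.zeros((2, 2))                    # lateral inhibition, learned anti-Hebbian
eta_w, eta_m = 0.02, 0.05

for x in X:
    y = W @ x          # feedforward excitatory drive
    y = y - M @ y      # one pass of lateral inhibition
    # Oja-stabilized Hebbian update for the excitatory weights.
    W += eta_w * (np.outer(y, x) - (y ** 2)[:, None] * W)
    # Anti-Hebbian update: units that fire together inhibit each other more.
    M += eta_m * np.outer(y, y)
    np.fill_diagonal(M, 0.0)
```

The lateral weights grow as long as the output units fire together, so the system settles where output correlations vanish: the units come to represent different input directions, i.e. a decorrelated, sparser code.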

Mixing Hebbian (unsupervised) learning with feedback can guide the unsupervised learning process toward interesting, task-relevant features.

The model due to Weber and Triesch combines SOM- or K-Means-like learning of features with prediction-error feedback as in reinforcement learning. The model is thus able to learn relevant features and disregard irrelevant ones.
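To illustrate the idea only (this is not Weber and Triesch's architecture): the sketch below gates a competitive, K-Means-like winner update by a reward signal, so cluster structure in the task-relevant dimensions is reinforced while distractor dimensions average out.

```python
import numpy as np

rng = np.random.default_rng(3)

# Inputs: first 2 dims carry task-relevant clusters, last 2 are distractor noise.
labels = rng.integers(0, 2, 1000)
X = np.zeros((1000, 4))
X[:, :2] = np.eye(2)[labels] + rng.normal(scale=0.1, size=(1000, 2))
X[:, 2:] = rng.normal(size=(1000, 2))

W = rng.normal(scale=0.1, size=(2, 4))
eta = 0.05
for x, lab in zip(X, labels):
    winner = np.argmax(W @ x)                # competitive stage
    reward = 1.0 if winner == lab else -0.1  # prediction-error-like feedback
    # Reward-gated winner update: correct responses pull the winner toward
    # the input, errors push it away.
    W[winner] += eta * reward * (x - W[winner])
```

Over training, the units come to mirror the rewarded cluster structure in the first two dimensions, while the weights on the noise dimensions hover near zero because their updates have no consistent direction.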

Pavlou and Casey model the superior colliculus (SC).

They use Hebbian, competitive learning to learn a topographic mapping between modalities.

They also simulate cortical input.

The model due to Cuppini et al. develops low-level multisensory integration (following the spatial principle), such that integration occurs only in the presence of higher-level input.

In their model, Hebbian learning leads to sharpening of receptive fields, overlap of receptive fields, and integration gated by higher-level cognitive input.

Pitti et al. use a Hebbian learning algorithm to learn a somato-visual register.

Hebbian learning, and SOM-like algorithms in particular, have been used to model cross-sensory spatial register (e.g., in the SC).
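A minimal SOM sketch of cross-sensory register (illustrative, not any specific published model): each unit sees a concatenated visual/auditory position pair for the same location; after training, every unit's preferred location agrees across the two modalities.

```python
import numpy as np

rng = np.random.default_rng(4)

# Paired stimuli: the same spatial location sensed in two modalities,
# concatenated into one input vector (a stand-in for cross-sensory data).
loc = rng.uniform(0, 1, 2000)
X = np.stack([loc + rng.normal(scale=0.02, size=2000),
              loc + rng.normal(scale=0.02, size=2000)], axis=1)

n_units = 10
W = rng.uniform(0, 1, size=(n_units, 2))  # SOM codebook: (visual, auditory)
units = np.arange(n_units)
for t, x in enumerate(X):
    eta = 0.5 * (1 - t / len(X)) + 0.01        # annealed learning rate
    sigma = 2.0 * (1 - t / len(X)) + 0.3       # annealed neighborhood width
    bmu = np.argmin(np.linalg.norm(W - x, axis=1))        # best matching unit
    h = np.exp(-((units - bmu) ** 2) / (2 * sigma ** 2))  # neighborhood kernel
    W += eta * h[:, None] * (x - W)

# Each unit's visual and auditory preferred locations now coincide:
# a learned cross-sensory spatial register along a topographic map.
misalign = np.max(np.abs(W[:, 0] - W[:, 1]))
```

Because both input dimensions are driven by the same underlying location, the neighborhood-based SOM update pulls each unit's two preferences to the same value, which is exactly the register property attributed to SC maps.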