Tag: backprop

The network proposed by Auer et al. comprises a single layer of parallel perceptrons plus a central control unit that reads out the perceptrons' votes to compute the final result.

That central control unit also sends a two-bit feedback signal to the perceptrons for learning. All perceptrons receive the same feedback signal.

Auer et al. show that their network and training algorithm can achieve universal function approximation without the complex feedback signal required by backprop.
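A minimal sketch of this setup, assuming a p-delta-style update: each perceptron casts a binary vote, the central unit averages the votes, and every perceptron receives the same global feedback ("raise the output", "lower the output", or nothing). The sizes, learning rate, and margin handling below are illustrative simplifications, not Auer et al.'s exact rule.

```python
import numpy as np

rng = np.random.default_rng(0)

n_perceptrons, n_inputs = 21, 4
W = rng.normal(size=(n_perceptrons, n_inputs))   # one weight row per perceptron

def forward(x):
    votes = np.sign(W @ x)                       # each perceptron casts a +/-1 vote
    return votes, np.clip(votes.sum() / n_perceptrons, -1.0, 1.0)

def p_delta_update(x, target, lr=0.01, margin=0.1):
    votes, out = forward(x)
    # The central unit broadcasts only whether the output is too low,
    # too high, or close enough -- the same two-bit signal to every perceptron.
    if out < target - margin:        # feedback: "raise the output"
        W[votes < 0] += lr * x       # perceptrons voting -1 move toward +1
    elif out > target + margin:      # feedback: "lower the output"
        W[votes > 0] -= lr * x       # perceptrons voting +1 move toward -1

# Toy usage: nudge the network output toward +1 for one input.
x = rng.normal(size=n_inputs)
for _ in range(100):
    p_delta_update(x, target=1.0)
```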

A simple MLP would probably be able to learn optimal multi-sensory integration via backprop.

Using a space-coded approach instead of an MLP for learning multi-sensory integration has benefits:

  • learning is unsupervised
  • it can cope with missing data

Backpropagation was independently discovered at least four times within one decade.

If we know what kind of output we want and each neuron's output is a smooth function of its inputs, then the weight changes needed to get the right output from the input can be computed using calculus, i.e., via the chain rule.

Following this strategy, we get backpropagation.
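As a concrete illustration (an assumption-laden sketch, not anything prescribed above): a one-hidden-layer network with smooth tanh units and squared error, where the chain rule yields the gradient of the error with respect to every weight. The sizes, names, and loss are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(3, 2))   # hidden weights (3 units, 2 inputs)
W2 = rng.normal(scale=0.1, size=(1, 3))   # output weights

def forward(x):
    h = np.tanh(W1 @ x)                   # smooth hidden activations
    y = W2 @ h                            # linear output
    return h, y

def backprop_step(x, t, lr=0.1):
    global W1, W2
    h, y = forward(x)
    e = y - t                             # dE/dy for E = 0.5 * (y - t)^2
    dW2 = np.outer(e, h)                  # chain rule: dE/dW2
    dh = W2.T @ e                         # error propagated back to hidden layer
    dW1 = np.outer(dh * (1 - h**2), x)    # tanh'(a) = 1 - tanh(a)^2
    W2 -= lr * dW2
    W1 -= lr * dW1

# Toy usage: fit one input-target pair.
x, t = np.array([0.5, -1.0]), np.array([0.3])
for _ in range(100):
    backprop_step(x, t)
```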

If we want to learn classification using backprop, we cannot force the network to produce binary output, because binary output is not a smooth function of the input.

Instead, we can let the network learn to output the log probability of each class given the input.
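A minimal sketch of that idea, with made-up scores and class index: a numerically stable log-softmax converts raw per-class scores into log probabilities, and the resulting negative log-likelihood is smooth in the scores, so backprop applies.

```python
import numpy as np

def log_softmax(scores):
    shifted = scores - scores.max()               # stability: avoid exp overflow
    return shifted - np.log(np.exp(shifted).sum())

scores = np.array([2.0, -1.0, 0.5])               # raw per-class network outputs
log_probs = log_softmax(scores)
loss = -log_probs[0]                              # NLL if class 0 is the true class
grad = np.exp(log_probs) - np.array([1.0, 0.0, 0.0])  # d loss / d scores
```

The gradient has the familiar softmax-minus-one-hot form, which is exactly the smooth error signal backprop propagates into the rest of the network.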

One problem with backpropagation is that one usually starts with small weights, which are typically far from the optimal weights. Because the weight space is so large, learning can therefore take a long time.

Backprop needs a lot of labeled data to learn classification with many classes.

It is unclear how biological neurons could back-propagate errors to their inputs. Thus, the biological plausibility of backpropagation is limited.

Hinton argues that backpropagation is such a good idea that nature must have found a way to implement it somehow.

SOMs (along with other competitive learning algorithms, backprop, simulated annealing, the neocognitron, SVMs, and Bayesian models) suffer from catastrophic forgetting when forced to learn too quickly.