Show Tag: function-approximation


Single-layer perceptrons cannot approximate every continuous function; the classic counterexample is XOR, which is not linearly separable.
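
To make this concrete, here is a minimal sketch (toy setup of my own; the standard perceptron learning rule on the four XOR patterns) showing that the update never finds a separating hyperplane, so at least one pattern stays misclassified:

```python
import numpy as np

# XOR is not linearly separable, so a single-layer perceptron cannot fit it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

rng = np.random.default_rng(0)
w = rng.normal(size=2)
b = 0.0

for epoch in range(1000):
    errors = 0
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        if pred != yi:
            # Classic perceptron update; it never converges on XOR.
            w += (yi - pred) * xi
            b += (yi - pred)
            errors += 1
    if errors == 0:
        break

preds = (X @ w + b > 0).astype(int)
print("predictions:", preds, "targets:", y)  # at least one mismatch remains
```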

Multilayer perceptrons with only a single hidden layer can approximate any continuous function.

It was known before Hornik et al.'s work that specific classes of multilayer feedforward networks could approximate any continuous function.

Hornik et al. showed that multilayer feedforward networks with arbitrary squashing activation functions and only a single hidden layer can approximate any continuous function to any desired accuracy, provided the input patterns come from a compact set.
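
The theorem is about existence, but the construction it promises is easy to exercise empirically. Below is a minimal sketch (numpy; the target sin(x), hidden width, learning rate, and step count are illustrative assumptions, not from Hornik et al.) that fits a single tanh hidden layer to a continuous function on a compact interval:

```python
import numpy as np

# One hidden layer of tanh ("squashing") units fitted to sin(x)
# on [-pi, pi] by plain gradient descent on the squared error.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
t = np.sin(x)

H = 20                                   # number of hidden units
W1 = rng.normal(scale=1.0, size=(1, H))
b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, 1))
b2 = np.zeros(1)
lr = 0.01

for step in range(20000):
    h = np.tanh(x @ W1 + b1)             # hidden activations
    yhat = h @ W2 + b2                   # linear output layer
    err = yhat - t
    # Backprop-style gradients of the mean squared error.
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)       # tanh' = 1 - tanh^2
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

yhat = np.tanh(x @ W1 + b1) @ W2 + b2
print("max |error|:", np.abs(yhat - t).max())
```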

If an MLP fails to approximate a certain function, this can be due to

  • an inadequate learning procedure,
  • an inadequate number of hidden units (not layers; see the sketch after this list),
  • noise in the training data.
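
The hidden-unit failure mode is easy to demonstrate. A hedged sketch using scikit-learn's MLPRegressor (an assumption; any MLP implementation would do, and the target function and widths are arbitrary choices): with too few hidden units the training error stays high even though the architecture class is universal.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Compare hidden-layer widths on an oscillating target; a single
# tanh unit is monotone and cannot fit it, wider layers can.
x = np.linspace(-np.pi, np.pi, 400).reshape(-1, 1)
t = np.sin(3 * x).ravel()

for h in (1, 2, 50):
    net = MLPRegressor(hidden_layer_sizes=(h,), activation="tanh",
                       max_iter=5000, random_state=0)
    net.fit(x, t)
    mse = np.mean((net.predict(x) - t) ** 2)
    print(f"hidden units = {h:3d}  training MSE = {mse:.4f}")
```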

In principle, a three-layer feedforward network (input, hidden, and output layer) should therefore be capable of approximating any continuous function.

The network proposed by Auer et al. comprises just one layer of parallel perceptrons and a central control entity that reads out the perceptrons' votes to compute the final result.

That central control also sends a two-bit feedback signal to the perceptrons for learning; all perceptrons receive the same feedback signal.
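
A minimal sketch of this scheme, assuming a simplified variant of Auer et al.'s p-delta rule (the margin-stabilization and weight-normalization terms of the full rule are omitted, and the toy task is my own): every perceptron votes, the central unit tallies the votes, and the same broadcast feedback ("raise the output" or "lower the output") drives all weight updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: points inside a circle are class +1, outside are -1.
X = rng.uniform(-1, 1, size=(500, 2))
y = np.where(np.sum(X**2, axis=1) < 0.5, 1, -1)
Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias input

n = 31                                      # number of parallel perceptrons
W = rng.normal(scale=0.1, size=(n, 3))
eta = 0.01

for epoch in range(200):
    for xi, yi in zip(Xb, y):
        votes = np.sign(W @ xi)             # each perceptron votes +/-1
        out = np.sign(votes.sum())          # central unit tallies the votes
        if out != yi:
            if yi > out:   # broadcast feedback: "raise the output"
                W[votes < 0] += eta * xi    # nudge the -1 voters toward +1
            else:          # broadcast feedback: "lower the output"
                W[votes > 0] -= eta * xi    # nudge the +1 voters toward -1

votes = np.sign(Xb @ W.T)
acc = np.mean(np.sign(votes.sum(axis=1)) == y)
print(f"training accuracy: {acc:.2f}")
```

Note how each perceptron receives only the global "too high / too low" direction, not a per-weight gradient; this is the point of contrast with backpropagation made below.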

Auer et al. show that their network and training algorithm achieve universal function approximation without the complex, per-weight error feedback that backpropagation requires.