Show Tag: feedforward

It was known before Hornik et al.'s work that specific classes of multilayer feedforward networks could approximate any continuous function.

Hornik et al. showed that multilayer feed-forward networks with arbitrary squashing functions and only a single hidden layer can approximate any continuous function to any desired accuracy (on a compact set of input patterns).

If an MLP fails to approximate a certain function, this can be due to

  • an inadequate learning procedure,
  • an inadequate number of hidden units (not layers),
  • noise.

In principle, a three-layer feedforward network should be capable of approximating any (continuous) function.
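
As a concrete illustration of this claim, here is a minimal sketch (not from the cited papers) of a single-hidden-layer MLP with a squashing (tanh) nonlinearity fitted to a continuous target function by plain gradient descent; the target function, hidden width, learning rate, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

# Minimal sketch: one hidden layer with a squashing (tanh) activation,
# linear output, trained by full-batch gradient descent on MSE.
rng = np.random.default_rng(0)

# Training data: a continuous function on a compact interval.
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

n_hidden = 30
W1 = rng.normal(0, 1.0, (1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, 1))
b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass: single hidden layer, squashing nonlinearity.
    h = np.tanh(x @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y

    # Backward pass (gradients of the mean squared error).
    grad_W2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    grad_W1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)

    W2 -= lr * grad_W2
    b2 -= lr * grad_b2
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1

print("final MSE:", float((err ** 2).mean()))
```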

The network proposed by Auer et al. comprises just one layer of parallel perceptrons and a central control entity that reads out the perceptrons' votes to compute the final result.

That central control entity also sends a two-bit feedback signal to the perceptrons for learning; all perceptrons receive the same signal.

Auer et al. show that their network and training algorithm can achieve universal function approximation without the detailed, weight-specific feedback signals required by backpropagation.
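
A rough sketch of that voting readout is given below: each perceptron votes ±1 and a central unit squashes the vote count, as described above. The broadcast update shown is a deliberately simplified stand-in for the p-delta rule, not the exact algorithm from the paper; sizes, learning rate, and margin are illustrative assumptions.

```python
import numpy as np

# Simplified parallel-perceptron readout with a broadcast feedback signal.
rng = np.random.default_rng(1)
n_perceptrons, dim = 15, 3
W = rng.normal(0, 1, (n_perceptrons, dim))   # one weight vector per perceptron

def predict(x):
    votes = np.sign(W @ x)                   # each perceptron votes +1 or -1
    return np.clip(votes.sum() / n_perceptrons, -1.0, 1.0)  # central squashing

def update(x, target, lr=0.01, eps=0.05):
    """All perceptrons receive the same coarse error signal ('too high' /
    'too low'); each one adjusts its own weights locally."""
    y = predict(x)
    if y > target + eps:        # output too high: push some votes toward -1
        for i in range(n_perceptrons):
            if W[i] @ x > 0:
                W[i] -= lr * x
    elif y < target - eps:      # output too low: push some votes toward +1
        for i in range(n_perceptrons):
            if W[i] @ x < 0:
                W[i] += lr * x
```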

With a few strong but empirically valid assumptions about the input, CNNs buy us a reduction in the number of parameters, and thus better training performance, compared to standard fully connected feed-forward ANNs.
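
A back-of-the-envelope comparison makes the parameter reduction concrete; the layer sizes and kernel size below are arbitrary example choices, not taken from any particular architecture.

```python
# Parameter count of a convolutional layer vs. a fully connected layer
# mapping the same input to the same number of output feature maps.
h, w, c_in, c_out = 32, 32, 3, 16   # input size and channel counts
k = 3                               # kernel size

conv_params = c_out * (k * k * c_in + 1)                          # shared kxk filters + biases
dense_params = (h * w * c_in) * (h * w * c_out) + h * w * c_out   # full connectivity

print(f"conv layer:  {conv_params:,} parameters")    # 448
print(f"dense layer: {dense_params:,} parameters")   # ~50 million
```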

A Deep Belief Network is a multi-layered, feed-forward network in which each successive layer infers latent variables of the input from the output of the preceding layer.
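
A minimal sketch of such a layer-by-layer inference pass is shown below; the layer sizes are arbitrary, and the random weights are placeholders standing in for the result of greedy layer-wise (RBM) pretraining.

```python
import numpy as np

# Each layer infers its latent variables from the output of the layer below
# via a sigmoid of a linear transform (a deterministic up-pass).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
layer_sizes = [784, 500, 250, 30]   # visible layer followed by hidden layers
weights = [rng.normal(0, 0.01, (a, b))
           for a, b in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(b) for b in layer_sizes[1:]]

def infer(v):
    """Propagate an input upward, layer by layer, to the top-level latent code."""
    h = v
    for W, b in zip(weights, biases):
        h = sigmoid(h @ W + b)      # activation of the next layer's latent units
    return h

code = infer(rng.random(784))
print(code.shape)   # (30,)
```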

Backward connections in the visual cortex show less topographical organization (they 'show abundant axonal bifurcation') and are more abundant than forward connections.

Feedforward connections in the visual cortex seem to be driving while feedback connections seem to be modulatory.

LGN receives more feedback projections from V1 than forward connections from the retina.

The biased competition theory of visual attention explains attention as the effect of low-level stimuli competing with each other for representational and processing resources. According to this theory, higher-level processes/brain regions bias this competition.

The traditional reservoir computing architecture consists of input, reservoir, and output populations.

The input layer projects to the reservoir through feed-forward connections.

The reservoir neurons are connected to each other and to the neurons in the output layer.

Only the connections from the reservoir to output neurons are learned; the others are randomly initialized and fixed.
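
The sketch below illustrates this setup in the style of an echo state network: fixed random input and reservoir weights, and a readout trained by ridge regression. The population sizes, spectral radius, and the next-step prediction task are illustrative assumptions, not details from the original text.

```python
import numpy as np

# Reservoir computing sketch: only the reservoir-to-output weights are learned;
# input and recurrent reservoir weights are random and stay fixed.
rng = np.random.default_rng(3)
n_in, n_res = 1, 100

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))       # fixed random input weights
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))     # fixed random recurrent weights
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # scale spectral radius below 1

# Drive the reservoir with a signal and collect its states.
u = np.sin(0.1 * np.arange(1000)).reshape(-1, 1)
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for t in range(len(u)):
    x = np.tanh(W_in @ u[t] + W_res @ x)
    states[t] = x

# Train only the readout (reservoir -> output) by ridge regression,
# predicting the input one step ahead.
X, Y = states[:-1], u[1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y)

pred = X @ W_out
print("readout MSE:", float(((pred - Y) ** 2).mean()))
```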