Multilayer perceptrons with only a single hidden layer can approximate any continuous function on a compact domain to arbitrary accuracy, given enough hidden units (the universal approximation theorem).

If an MLP fails to approximate a given function in practice, this can be due to

• an inadequate learning procedure,
• an inadequate number of hidden units (not layers),
• noise in the training data.

In principle, a three-layer feedforward network (input, hidden, and output layer) is therefore capable of approximating any continuous function.
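As a minimal sketch of this idea (not from the text), the following NumPy example trains a one-hidden-layer MLP with tanh units and a linear output to approximate sin(x) by full-batch gradient descent; all hyperparameters (30 hidden units, learning rate, step count) are illustrative choices, not prescribed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: sample the target function y = sin(x) on [-pi, pi]
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

# One hidden layer of tanh units, linear output (illustrative sizes)
H = 30
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)

lr = 0.01
for step in range(20000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)      # hidden activations, shape (200, H)
    pred = h @ W2 + b2            # network output, shape (200, 1)
    err = pred - y
    loss = np.mean(err ** 2)      # mean squared error

    # Backward pass (full-batch gradient descent)
    g_pred = 2.0 * err / len(X)
    gW2 = h.T @ g_pred;            gb2 = g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (1.0 - h ** 2)   # tanh derivative
    gW1 = X.T @ g_h;               gb1 = g_h.sum(axis=0)

    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"final MSE: {loss:.4f}")
```

A constant-zero predictor would score an MSE of about 0.5 on this data, so a final MSE far below that indicates the single hidden layer has indeed approximated the function; failures of the kind listed above can be provoked by shrinking `H` or cutting the training steps.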