
The model by Cuppini et al. does not need neural multiplication to implement superadditivity or inverse effectiveness. Instead, it exploits the sigmoid transfer function of multi-sensory neurons: because of this sigmoid and because of less-than-unit weights between input and multi-sensory neurons, a weak stimulus that falls into the low, sub-linear region of the sigmoid evokes a less-than-linear response in the multi-sensory neuron. The summed input of two such stimuli (from different modalities), however, can fall into the steep, near-linear range of the sigmoid, so the combined response can be much greater than the sum of the individual responses.
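This superadditivity effect can be illustrated numerically with a thresholded sigmoid. The threshold and input values below are illustrative choices, not the actual parameters of the Cuppini et al. model:

```python
import math

def sigmoid(u, theta=5.0):
    # Sigmoid transfer function with threshold theta
    # (illustrative parameters, not those of the Cuppini et al. model).
    return 1.0 / (1.0 + math.exp(-(u - theta)))

# A weak unisensory stimulus: net input well below the sigmoid threshold,
# so the evoked response is less than linear.
weak_input = 3.0
single_response = sigmoid(weak_input)

# Two weak stimuli from different modalities sum their inputs, pushing the
# multi-sensory neuron into the steep region of the sigmoid.
combined_response = sigmoid(2 * weak_input)

print(single_response)                          # small response (~0.12)
print(combined_response)                        # much larger (~0.73)
print(combined_response > 2 * single_response)  # superadditivity holds
```

The combined response (~0.73) far exceeds the sum of the two individual responses (~0.24), without any multiplicative interaction between the inputs.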

Hornik et al. showed that multilayer feed-forward networks with **arbitrary** squashing functions and only a single hidden layer can approximate any continuous function to any desired accuracy (on a compact set of input patterns).

Hornik et al. define a squashing function as a non-decreasing function $\Psi:\mathbb{R}\rightarrow [0,1]$ whose limit at infinity is 1 and whose limit at negative infinity is 0.
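The flavor of this existence result can be sketched numerically: fix random hidden weights for a single hidden layer of logistic units (the logistic function is one such squashing function) and fit only the linear output layer by least squares. The target function, the number of hidden units, and the weight scale below are arbitrary illustrative choices, and this is not Hornik et al.'s actual construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    # The logistic function: a squashing function in Hornik et al.'s sense
    # (non-decreasing, limits 0 and 1 at -inf and +inf).
    return 1.0 / (1.0 + np.exp(-u))

# An arbitrary continuous target function on the compact set [-1, 1].
x = np.linspace(-1, 1, 200)
target = np.sin(3 * x)

# Single hidden layer with fixed random weights (illustrative parameters).
n_hidden = 50
a = rng.normal(scale=5.0, size=n_hidden)   # input-to-hidden weights
b = rng.normal(scale=5.0, size=n_hidden)   # hidden biases
H = sigmoid(np.outer(x, a) + b)            # hidden-layer activations

# Only the linear output layer is fitted, via least squares.
w, *_ = np.linalg.lstsq(H, target, rcond=None)
approx = H @ w

print(np.max(np.abs(approx - target)))     # small approximation error
```

With enough hidden units, the maximum error on the sampled interval becomes very small, consistent with the universal approximation theorem.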

The transfer functions of reservoir nodes in reservoir computing are usually non-linear. The mapping from the low- to the high-dimensional space is therefore non-linear, and representations that are linearly inseparable in the input layer can become linearly separable in the reservoir layer. Training only the linear, non-recurrent output layer is therefore enough even for problems which could not be solved by a single-layer perceptron on its own.
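The classic example of such a problem is XOR. The sketch below uses a fixed random non-linear layer as a stand-in for a reservoir's high-dimensional state (the recurrence is omitted for brevity, since the separability argument does not depend on it) and trains only a linear readout; all sizes and scales are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR is not linearly separable in the 2-D input space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])  # XOR targets

# Fixed, untrained non-linear expansion into a higher-dimensional space,
# standing in for a reservoir state (no recurrence in this sketch).
n_units = 20
W_in = rng.normal(scale=2.0, size=(2, n_units))
bias = rng.normal(scale=2.0, size=n_units)
states = np.tanh(X @ W_in + bias)

# Only the linear output layer is trained, via least squares.
w, *_ = np.linalg.lstsq(states, y, rcond=None)
pred = (states @ w > 0.5).astype(float)

print(pred)  # reproduces the XOR targets
```

The linear readout alone could never solve XOR on the raw 2-D inputs, but in the expanded non-linear representation the classes become linearly separable.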