Lennert Appeltant, Miguel C. Soriano, Guy Van der Sande, et al.: Information processing using a single dynamical node as complex system. Nature Communications, Vol. 2 (13 September 2011), 468, doi:10.1038/ncomms1476.
@article{appeltant-et-al-2011,
    author = {Appeltant, Lennert and Soriano, Miguel C. and Van der Sande, Guy and Danckaert, Jan and Massar, Serge and Dambre, Joni and Schrauwen, Benjamin and Mirasso, Claudio R. and Fischer, Ingo},
    day = {13},
    doi = {10.1038/ncomms1476},
    issn = {2041-1733},
    journal = {Nature Communications},
    keywords = {reservoir-computing},
    month = sep,
    pages = {468+},
    publisher = {Nature Publishing Group},
    title = {Information processing using a single dynamical node as complex system},
    url = {http://dx.doi.org/10.1038/ncomms1476},
    volume = {2},
    year = {2011}
}

The traditional reservoir computing architecture consists of an input, a reservoir, and an output population.

The input layer is connected to the reservoir via feed-forward connections.

The reservoir neurons are connected to each other and to the neurons in the output layer.

Only the connections from the reservoir to the output neurons are learned; all others are randomly initialized and kept fixed.

The number of reservoir nodes in reservoir computing is typically much larger than the number of input or output neurons.

A reservoir network therefore first projects the low-dimensional input into a high-dimensional space and then maps it back into a low-dimensional output space.

The transfer functions of the reservoir nodes are usually non-linear. The mapping from the low- to the high-dimensional space is therefore non-linear, and representations that are linearly inseparable in the input layer can become linearly separable in the reservoir layer. Training the linear, non-recurrent output layer is then sufficient even for problems that a single-layer perceptron could not solve on its own.
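
To make this concrete, here is a minimal echo-state-network sketch in Python/NumPy. The network sizes, weight scales, spectral radius, and the toy delay task are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out = 1, 100, 1        # reservoir much larger than input/output

W_in  = rng.uniform(-0.5, 0.5, (n_res, n_in))   # fixed random input weights
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))  # fixed random recurrent weights
# Scale the recurrent weights so the spectral radius is < 1 (echo state property).
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))

def run_reservoir(u):
    """Collect the reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W_res @ x)   # non-linear high-dimensional state
        states.append(x)
    return np.array(states)

# Toy task: reproduce u(t-2) from u(t); it needs memory and a rich state space.
T = 1000
u = rng.uniform(-1, 1, (T, n_in))
y = np.roll(u, 2, axis=0)

X = run_reservoir(u)
# Only the reservoir-to-output weights are learned, here by ridge regression.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))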

Reservoir networks exhibit fading memory: the influence of a past input on the reservoir state decays over time, so the current state is dominated by recent inputs.
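
This can be illustrated directly: run two copies of the same (hypothetical) reservoir on input sequences that differ at a single time step and watch the state difference die out. All parameters below are again illustrative.

import numpy as np

rng = np.random.default_rng(1)
n = 100
W = rng.uniform(-0.5, 0.5, (n, n))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1
w_in = rng.uniform(-0.5, 0.5, n)

u = rng.uniform(-1, 1, 200)
u_perturbed = u.copy()
u_perturbed[0] += 1.0                       # inputs differ only at the first step

def states(seq):
    x, out = np.zeros(n), []
    for u_t in seq:
        x = np.tanh(w_in * u_t + W @ x)
        out.append(x.copy())
    return np.array(out)

diff = np.linalg.norm(states(u) - states(u_perturbed), axis=1)
print(diff[:10])   # shrinks toward zero: the perturbation is forgotten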

A good reservoir network shows very different behavior for semantically different input, and similar behavior for semantically similar input.

Basically, that's what we want of every network, though.

Appeltant et al. replace the large reservoir population with a simple delay system consisting of just one non-linear node and a delay loop.

Instead of feeding the input into the system all at once through parallel input connections, it is time-multiplexed such that the reaction to parts of the input is already traveling through the delay loop when other parts enter the system.
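
Below is a minimal sketch of this time-multiplexing idea in the same NumPy style as above. The mask values, scalings, and tanh node are assumptions for illustration, and the update is a simplified discrete-time approximation in which each virtual node is driven only by its own delayed response; it is not the authors' exact dynamical system.

import numpy as np

rng = np.random.default_rng(2)
N = 50                              # number of virtual nodes along the delay line
mask = rng.choice([-0.5, 0.5], N)   # fixed random input mask (assumed values)
eta, gamma = 0.5, 0.05              # feedback and input scaling (assumed values)

def delay_reservoir(u):
    """For each input sample, return the states of the N virtual nodes."""
    delay = np.zeros(N)             # contents of the delay loop
    all_states = []
    for u_k in u:
        J = gamma * mask * u_k      # sample-and-hold input, multiplexed by the mask
        states = np.empty(N)
        for i in range(N):
            # The single node processes one virtual node at a time; while node i
            # is computed, the other responses are still traveling in the loop.
            states[i] = np.tanh(eta * delay[i] + J[i])
        delay = states              # the responses re-enter the loop after one delay
        all_states.append(states)
    return np.array(all_states)

u = rng.uniform(-1, 1, 500)
X = delay_reservoir(u)              # shape (500, N): one virtual layer per sample
# A linear readout on X is then trained exactly as for a classical reservoir.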