Show Tag: self-organization

Self-organization occurs in the physical world as well as in information-processing systems. In neural-network-like systems, SOMs are not the only form of self-organization.

Hinoshita et al. define self-organization as the phenomenon of a global, coherent structure arising in a system through local interactions between its elements, as opposed to through some form of central control.

According to Hinoshita et al., recurrent neural networks are capable of self-organization.

Weber presents a Helmholtz machine extended by adaptive lateral connections between units, together with a topological interpretation of the network. A Gaussian prior over the population response (a prior favoring co-activation of nearby units) and training on natural images lead to spatial self-organization and feature selectivity similar to that of cells in early visual cortex.
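As a toy illustration of such a smoothness prior (not Weber's actual model; the 1-D topology, sizes, and parameters are all illustrative), a Gaussian prior whose covariance decays with distance on the lattice assigns higher probability to population responses in which nearby units co-activate:

```python
import numpy as np

# Toy sketch: a Gaussian prior over the population response of units
# arranged on a 1-D lattice. Prior covariance decays with lattice
# distance, so smooth, localized activity patterns score higher than
# the same activity scattered over distant units.
n_units = 32
sigma = 3.0                      # neighborhood width (illustrative)
pos = np.arange(n_units)
dist2 = (pos[:, None] - pos[None, :]) ** 2
cov = np.exp(-dist2 / (2 * sigma**2)) + 1e-4 * np.eye(n_units)
cov_inv = np.linalg.inv(cov)

def log_prior(r):
    """Unnormalized Gaussian log-prior of a population response r."""
    return -0.5 * r @ cov_inv @ r

rng = np.random.default_rng(0)
bump = np.exp(-((pos - 15) ** 2) / (2 * 2.0**2))   # co-activation of neighbors
scattered = rng.permutation(bump)                  # same values, no structure
print(log_prior(bump) > log_prior(scattered))      # True: the prior favors it
```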

According to Wilson and Bednar, there are four main families of theories concerning topological feature maps:

  • input-driven self-organization,
  • minimal-wire length,
  • place-coding theory,
  • Turing pattern formation.

Wilson and Bednar argue that input-driven self-organization and Turing pattern formation explain how topological maps may arise from useful processes, but not why topological maps are useful in themselves.

Kohonen cites von der Malsburg and Amari as among the first to demonstrate input-driven self-organization in machine learning.

Kohonen implies that neighborhood interaction in SOMs is an abstraction of chemical interactions between neurons in natural brain maps, which affect those neurons' plasticity, but not their current response.

Kohonen implies that neighborhood interaction is what separates SOMs from earlier, more biologically inspired attempts at input-driven self-organization: it is what makes them computationally tractable on the one hand and produces proper self-organization, as found in natural brain maps, on the other.
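For concreteness, a minimal sketch of one standard Kohonen SOM training step (sizes and learning parameters are illustrative). Note how the neighborhood function scales only the weight update, i.e. the plasticity, while the winning unit is found by plain nearest-neighbor search, unaffected by the neighborhood:

```python
import numpy as np

# Minimal sketch of one Kohonen SOM training step (illustrative sizes).
# The Gaussian neighborhood h scales only the weight update -- the
# plasticity -- and does not change which unit responds or how strongly.
rng = np.random.default_rng(0)
grid_w, grid_h, dim = 8, 8, 3                 # 8x8 map of 3-D weight vectors
weights = rng.random((grid_w * grid_h, dim))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)])

def som_step(x, weights, lr=0.1, sigma=2.0):
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
    d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)       # map-space distances
    h = np.exp(-d2 / (2 * sigma**2))                       # neighborhood kernel
    weights += lr * h[:, None] * (x - weights)             # plasticity only
    return bmu

for _ in range(1000):                  # in practice lr and sigma decay over time
    som_step(rng.random(dim), weights)
```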

The RNNPB learns sequences of inputs in an unsupervised, self-organized fashion.

For similar inputs, the RNNPB learns similar parametric bias vectors.
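A conceptual PyTorch sketch of the parametric-bias idea (a simplification, not Tani and Ito's exact RNNPB training scheme; all sizes and names are illustrative): each sequence owns a small learnable PB vector that is fed into the recurrent network at every time step and updated by the same gradient descent as the shared weights, so sequences with similar dynamics tend to settle on similar PB vectors.

```python
import torch
import torch.nn as nn

class PBRNN(nn.Module):
    def __init__(self, n_seqs, in_dim=2, hid_dim=16, pb_dim=2):
        super().__init__()
        self.pb = nn.Embedding(n_seqs, pb_dim)    # one PB vector per sequence
        self.rnn = nn.RNN(in_dim + pb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, in_dim)

    def forward(self, x, seq_id):
        # Broadcast each sequence's PB vector over all of its time steps.
        pb = self.pb(seq_id)[:, None, :].expand(-1, x.shape[1], -1)
        h, _ = self.rnn(torch.cat([x, pb], dim=-1))
        return self.out(h)

model = PBRNN(n_seqs=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
data = torch.randn(4, 20, 2)              # toy stand-in for 4 input sequences
for _ in range(200):
    pred = model(data[:, :-1], torch.arange(4))   # predict the next input
    loss = nn.functional.mse_loss(pred, data[:, 1:])
    opt.zero_grad()
    loss.backward()
    opt.step()
print(model.pb.weight)    # learned PB vectors, one per training sequence
```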

ANNs implementing DBNs have been around for a long time; they go back at least to Fukushima's Neocognitron.

The motmap algorithm uses reinforcement learning to organize behavior in a two-dimensional map.

Self-organization may play a role in organizing auditory localization independent of visual input.

In Friston's architecture, competitive learning serves to decorrelate error units.

My SOMs learn competitively, but they encode latent variables rather than errors.

The SOM has ancestors in von der Malsburg's "Self-Organization of Orientation Sensitive Cells in the Striate Cortex" and other early models of self-organization.

Von der Malsburg introduces a simple model of self-organization that explains the organization of orientation-sensitive cells in the visual cortex.