Reference: "Forming sparse representations by local anti-Hebbian learning"

Peter Földiák (1990). Forming sparse representations by local anti-Hebbian learning. Biological Cybernetics, 64(2), 165-170.
@article{foeldiak-1990,
    abstract = {How does the brain form a useful representation of its environment? It is shown here that a layer of simple Hebbian units connected by modifiable {anti-Hebbian} feed-back connections can learn to code a set of patterns in such a way that statistical dependency between the elements of the representation is reduced, while information is preserved. The resulting code is sparse, which is favourable if it is to be used as input to a subsequent supervised associative layer. The operation of the network is demonstrated on two simple problems.},
    author = {F\"{o}ldi\'{a}k, Peter},
    issn = {0340-1200},
    journal = {Biological Cybernetics},
    keywords = {competitive-learning, representation, sparse-coding},
    number = {2},
    pages = {165--170},
    pmid = {2291903},
    title = {Forming sparse representations by local {anti-Hebbian} learning.},
    url = {http://view.ncbi.nlm.nih.gov/pubmed/2291903},
    volume = {64},
    year = {1990}
}

One advantage of unsupervised learning is that it does not require labeled data, so there is usually much more data available for learning.

Regular Hebbian learning drives all neurons toward the same dominant structure in the input, so they all end up responding to the same thing. One way to force neurons to specialize is competitive learning.

Competitive learning can be implemented in an artificial neural network by strong, constant inhibitory connections between the competing neurons.
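
A minimal sketch of this idea, assuming the strong constant inhibition is approximated by a winner-take-all selection; the array names, learning rate and toy data below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_units = 16, 4
eta = 0.05                                           # learning rate (illustrative)
W = rng.normal(scale=0.1, size=(n_units, n_inputs))  # feedforward weights

def competitive_step(x):
    """Strong constant inhibition between the output units means only the most
    strongly activated unit stays on (winner-take-all); only that winner then
    moves its weights toward the current input (Hebbian update for the winner)."""
    winner = np.argmax(W @ x)               # inhibition silences all other units
    W[winner] += eta * (x - W[winner])      # Hebbian move toward the input
    W[winner] /= np.linalg.norm(W[winner])  # keep the weight vector bounded

# toy usage: noisy samples around a few fixed prototypes
prototypes = rng.integers(0, 2, size=(n_units, n_inputs)).astype(float)
for _ in range(1000):
    x = prototypes[rng.integers(n_units)] + rng.normal(scale=0.1, size=n_inputs)
    competitive_step(x / np.linalg.norm(x))
```

Because only the winner updates on each presentation, different units drift toward different clusters of inputs and so specialize.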

Simple competitive learning with constant inhibitory connections between competing neurons leads to grandmother-cell-type codes.

Simple competitive learning with constant inhibitory connections between competing neurons produces a code that facilitates further processing.

Combinatorial or compositional codes are more efficient than grandmother-cell codes in terms of the number of cells needed to represent a given number of distinct instances. They also generalize better.

Sparse coding is a compromise between the simplicity of grandmother-cell codes and the efficiency and generalization ability of dense combinatorial codes.
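
A rough counting illustration of this trade-off (my own arithmetic, not from the paper): with N binary units, a grandmother-cell code can label at most N distinct items, a dense combinatorial code can in principle label 2^N, and a sparse code with exactly k active units can label C(N, k).

```python
from math import comb

N = 100          # number of binary units
k = 5            # active units per pattern in a sparse code

grandmother = N           # one dedicated unit per item
sparse      = comb(N, k)  # patterns with exactly k of N units active
dense       = 2 ** N      # all binary patterns over N units

print(f"grandmother: {grandmother:,}")
print(f"sparse (k={k}): {sparse:,}")
print(f"dense: {dense:,}")
# grandmother: 100
# sparse (k=5): 75,287,520
# dense: 1,267,650,600,228,229,401,496,703,205,376
```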

A network combining Hebbian and anti-Hebbian learning can produce a sparse code: the excitatory connections from the input to the output units are learned with a Hebbian rule, while the inhibitory connections between output units are learned with an anti-Hebbian rule.
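
A condensed sketch in the spirit of that scheme: feedforward weights grow with a Hebbian rule, lateral weights between output units become more inhibitory whenever two units fire together (anti-Hebbian), and thresholds adapt to keep activity sparse. The constants, the settling loop, the threshold-adaptation term and the toy input are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 16, 8
p = 1.0 / n_out                       # target firing probability (assumed sparseness)
alpha, beta, gamma = 0.1, 0.02, 0.02  # illustrative learning rates

Q = rng.uniform(size=(n_out, n_in))   # excitatory feedforward weights (Hebbian)
Q /= np.linalg.norm(Q, axis=1, keepdims=True)
W = np.zeros((n_out, n_out))          # inhibitory lateral weights (anti-Hebbian)
t = np.zeros(n_out)                   # adaptive thresholds

def step(x, settle=10):
    # Let the recurrent inhibition settle into an (approximately) binary output.
    y = np.zeros(n_out)
    for _ in range(settle):
        y = ((Q @ x + W @ y - t) > 0).astype(float)
    # Anti-Hebbian lateral rule: units that fire together grow mutual inhibition.
    W[:] = np.minimum(W - alpha * (np.outer(y, y) - p * p), 0.0)
    np.fill_diagonal(W, 0.0)          # no self-inhibition
    # Hebbian feedforward rule with decay toward the input (keeps Q bounded).
    Q[:] += beta * y[:, None] * (x[None, :] - Q)
    # Thresholds adapt so each unit fires with probability roughly p.
    t[:] += gamma * (y - p)
    return y

# toy usage: random sparse binary input patterns
for _ in range(2000):
    step((rng.random(n_in) < 0.2).astype(float))
```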

Representing an object by only one neuron (a 'grandmother cell') makes subsequent processing very easy.

In Friston's architecture, competitive learning serves to de-correlate error units.