Tag: neural-coding


Combinatorial or compositional codes are more efficient than codes based on grandmother cells in terms of the number of cells needed to code an instance. They also generalize better.
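A back-of-the-envelope sketch of how capacity scales with the coding scheme (the cell count and sparseness level below are arbitrary toy numbers, not from the source):

```python
from itertools import combinations

n_cells = 10

# Grandmother-cell (one-hot) code: each item needs its own dedicated cell.
grandmother_capacity = n_cells

# Combinatorial (dense binary) code: any subset of active cells is a codeword.
combinatorial_capacity = 2 ** n_cells

# Sparse code: only codewords with exactly k active cells are used.
k = 2
sparse_capacity = sum(1 for _ in combinations(range(n_cells), k))  # n choose k

print(f"{n_cells} cells, grandmother code: {grandmother_capacity} items")
print(f"{n_cells} cells, combinatorial:    {combinatorial_capacity} items")
print(f"{n_cells} cells, {k}-sparse:         {sparse_capacity} items")
```

The sparse code sits between the two extremes, which is one way to read the compromise described in the next note.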

Sparse coding is a compromise between the simplicity of grandmother-cell codes and the efficiency and generalization ability of combinatorial (dense) codes.

A network with Hebbian and anti-Hebbian learning can produce a sparse code: excitatory connections from the input to the output neurons are learned with a Hebbian rule, while the inhibitory connections between output neurons are learned with an anti-Hebbian rule.
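A minimal sketch of such a network, loosely in the spirit of Földiák-style models; the sigmoid gain, learning rates, weight normalization, and threshold adaptation are my own assumptions, not taken from the note's source:

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 16, 8
W = rng.normal(scale=0.1, size=(n_out, n_in))   # excitatory input->output weights (Hebbian)
A = np.zeros((n_out, n_out))                    # inhibitory output<->output weights (anti-Hebbian)
t = np.zeros(n_out)                             # adaptive firing thresholds
eta_w, eta_a, eta_t = 0.02, 0.02, 0.02
p_target = 0.125                                # target fraction of active output units

def settle(x, n_iter=30):
    """Let output activity settle under feedforward drive and lateral inhibition."""
    y = np.zeros(n_out)
    for _ in range(n_iter):
        y = 1.0 / (1.0 + np.exp(-np.clip(10.0 * (W @ x - A @ y - t), -50.0, 50.0)))
    return y

for _ in range(5000):
    x = (rng.random(n_in) < 0.2).astype(float)  # random, sparse binary input pattern
    y = settle(x)
    # Hebbian: strengthen input->output weights between co-active units, then renormalize.
    W += eta_w * np.outer(y, x)
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    # Anti-Hebbian: output units that are co-active more often than chance inhibit each other more.
    A += eta_a * (np.outer(y, y) - p_target ** 2)
    np.fill_diagonal(A, 0.0)
    A = np.clip(A, 0.0, None)                   # inhibition never turns excitatory
    # Threshold adaptation keeps each unit's average activity near the sparse target.
    t += eta_t * (y - p_target)

y_test = settle((rng.random(n_in) < 0.2).astype(float))
print("active output units on a test pattern:", int((y_test > 0.5).sum()), "of", n_out)
```

The anti-Hebbian rule raises inhibition between output units that are co-active more often than chance, pushing them to respond to different input patterns and thereby decorrelating (sparsifying) the output code.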

Trigger feature hypothesis: early hypothesis on neural coding. One perceptual feature triggers activity in one neuron.

The trigger feature hypothesis in principle postulates combinatorial codes (or even sparse coding), since a percept comprising many features would be represented by the joint activity of the corresponding feature-selective neurons.

Neural codes with overlapping receptive fields are less likely to be corrupted by noise than codes in which each neuron codes for only one value.
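A toy simulation of that claim (my own setup, not from any cited work): a scalar stimulus is encoded either by a labelled-line code, where only the one matching neuron responds, or by overlapping Gaussian receptive fields; both are decoded by template matching after adding the same response noise:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials, noise_sd = 20, 2000, 0.3
prefs = np.linspace(0.0, 1.0, n_neurons)            # preferred stimulus values
candidates = np.linspace(0.0, 1.0, 201)             # decoding grid

def one_per_value(s):
    """Only the neuron whose preferred value is closest to s responds."""
    r = np.zeros(n_neurons)
    r[np.argmin(np.abs(prefs - s))] = 1.0
    return r

def overlapping(s, width=0.15):
    """Broad Gaussian receptive fields that overlap heavily."""
    return np.exp(-0.5 * ((prefs - s) / width) ** 2)

for name, encode in (("one neuron per value", one_per_value), ("overlapping fields", overlapping)):
    templates = np.array([encode(c) for c in candidates])   # noise-free response patterns
    errors = []
    for _ in range(n_trials):
        s = rng.random()
        r = encode(s) + rng.normal(scale=noise_sd, size=n_neurons)
        s_hat = candidates[np.argmin(((templates - r) ** 2).sum(axis=1))]  # template matching
        errors.append(abs(s_hat - s))
    print(f"{name:22s} mean decoding error: {np.mean(errors):.3f}")
```

With overlapping fields the stimulus-dependent signal is spread over several neurons, so independent noise averages out and large decoding errors become rare.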

Tabareau et al. propose a scheme for a transformation from the topographic mapping in the superior colliculus (SC) to the temporal code of the saccadic burst generators.

According to their analysis, that code needs to be either linear or logarithmic.

Activity in the deep SC has been described as different regions competing for access to motor resources.

The amount of information encoded in neural spiking (within a certain time window) is finite and can be estimated.
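A toy version of such an estimate (assuming Poisson spiking, two equiprobable stimuli, and a 100 ms counting window; all numbers are made up): the direct, plug-in estimate of the mutual information between stimulus and spike count:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two stimuli drive a neuron at different mean rates; spikes are counted in a 100 ms window.
rates_hz = {"A": 20.0, "B": 50.0}
window_s, n_trials = 0.1, 20000

stims = rng.choice(list(rates_hz), size=n_trials)
counts = rng.poisson([rates_hz[s] * window_s for s in stims])

def entropy_bits(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Direct ("plug-in") estimate: I(S; N) = H(N) - H(N | S).
_, total = np.unique(counts, return_counts=True)
h_counts = entropy_bits(total / n_trials)

h_counts_given_stim = 0.0
for s in rates_hz:
    c = counts[stims == s]
    _, k = np.unique(c, return_counts=True)
    h_counts_given_stim += (len(c) / n_trials) * entropy_bits(k / len(c))

print(f"estimated I(stimulus; spike count) = {h_counts - h_counts_given_stim:.2f} bits (1 bit max)")
```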

Noisy neuronal responses can improve information transmission in populations (especially when neurons are threshold-like).
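A small illustration with a made-up toy model: a population of identical binary threshold units transmits a graded signal better when each unit gets its own independent noise, up to the point where the noise starts to dominate:

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_samples = 100, 5000
signal = rng.uniform(-1.0, 1.0, size=n_samples)    # graded signal shared by all units
threshold = 0.0

for noise_sd in (0.0, 0.2, 0.5, 1.0, 3.0):
    # Each unit thresholds the signal plus its own independent noise.
    noise = rng.normal(scale=noise_sd, size=(n_samples, n_neurons))
    spikes = (signal[:, None] + noise > threshold).astype(float)
    readout = spikes.mean(axis=1)                  # population readout: fraction of active units
    # Correlation between signal and readout as a crude proxy for information transmission.
    r = np.corrcoef(signal, readout)[0, 1]
    print(f"noise sd = {noise_sd:3.1f}   corr(signal, readout) = {r:.3f}")
```

Without noise the whole population merely reports the sign of the signal; independent noise turns each unit's firing probability into a graded function of the signal, so the population average carries much more of it.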

Not all neurons are created equal (and that's a good thing): populations of neurons with diverse parameters can represent information better than populations of identical neurons. (Think of different threshold values, or different regions of linearity in their transfer functions.)
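A companion sketch to the one above (again a toy model of my own): with no noise at all, simply spreading the units' thresholds across the signal range already makes the population readout nearly proportional to the signal:

```python
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_samples = 100, 5000
signal = rng.uniform(-1.0, 1.0, size=n_samples)

populations = {
    "identical thresholds": np.zeros(n_neurons),                # every unit flips at 0
    "diverse thresholds":   np.linspace(-1.0, 1.0, n_neurons),  # thresholds tile the signal range
}

for name, thresholds in populations.items():
    spikes = (signal[:, None] > thresholds[None, :]).astype(float)
    readout = spikes.mean(axis=1)                               # fraction of active units
    r = np.corrcoef(signal, readout)[0, 1]
    print(f"{name:21s} corr(signal, readout) = {r:.3f}")
```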

Adding both noise and heterogeneity to a population-coding network does not always improve coding.

Hunsberger et al. suggest that neural heterogeneity and response stochasticity both decorrelate and linearize population responses and thus improve transmission of information.

The goal of generative models is `to learn representations that are economical to describe but allow the input to be reconstructed accurately'.

Not sure how the goal of a generative model can be learning anything. However, it is clear that describing the parameters must require less information than describing the sensation.
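A rough illustration of that last point, using PCA as a stand-in for a learned generative model (dimensions, latent size, and noise level are arbitrary): per sensation, only a handful of latent values need to be described, yet the input can be reconstructed almost perfectly:

```python
import numpy as np

rng = np.random.default_rng(5)
d, k, n = 100, 5, 1000

# Toy "world": high-dimensional sensations generated from a few latent causes plus a little noise.
mixing = rng.normal(size=(d, k))
latents = rng.normal(size=(n, k))
sensations = latents @ mixing.T + 0.05 * rng.normal(size=(n, d))

# Stand-in for a learned generative model: PCA fitted to the sensations.
mean = sensations.mean(axis=0)
_, _, vt = np.linalg.svd(sensations - mean, full_matrices=False)
basis = vt[:k]                                  # the model's "parameters"
codes = (sensations - mean) @ basis.T           # economical per-sensation description
reconstruction = codes @ basis + mean

rel_err = np.sqrt(np.mean((reconstruction - sensations) ** 2) / np.mean(sensations ** 2))
print(f"values to describe per sensation: {d} raw vs. {k} latent")
print(f"relative reconstruction error:    {rel_err:.3f}")
```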

Since combinatorial and sparse codes are known to be efficient, it would make sense if the states of generative models were usually encoded in such codes.

That would preclude redundant population codes, unless our notion of efficiency itself makes room for redundancy.

The `efficient coding principle' states that a neural ensemble should encode as much information as possible in its response.
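A classic illustration of this principle is histogram equalization: with a bounded response range, a tuning curve that follows the cumulative distribution of the stimulus makes the responses uniformly distributed and therefore maximizes their entropy. A toy version (the stimulus distribution and number of response levels are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
stimuli = rng.exponential(scale=1.0, size=n)        # skewed stimulus distribution

def response_entropy_bits(responses, n_levels=16):
    """Entropy of responses quantized into n_levels equal-width response bins."""
    hist, _ = np.histogram(responses, bins=n_levels, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Naive tuning curve: response grows linearly with the stimulus up to saturation.
linear = np.clip(stimuli / stimuli.max(), 0.0, 1.0)

# Equalizing tuning curve: response follows the empirical CDF of the stimulus,
# so responses are (approximately) uniformly distributed over the response range.
equalized = np.searchsorted(np.sort(stimuli), stimuli) / n

print(f"linear tuning curve:       {response_entropy_bits(linear):.2f} bits")
print(f"CDF-matched tuning curve:  {response_entropy_bits(equalized):.2f} bits (max = 4.00 bits)")
```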

Cells in inferotemporal cortex are highly selective to the point where they approach being grandmother cells.