Show Tag: neural-computation


Carandini et al. argue for compositionality of mechanisms and mechanism sketches.

The replacement hypothesis of embodiment (not the one in anthropology) states that a description of the dynamics of body, brain, and environment can replace a description of human cognition in terms of representations and computational processes.

Not all neurons are created equal (and that's a good thing): populations of neurons with diverse parameters (e.g. different threshold values or different regions of linearity in their transfer functions) can represent information better than populations of identical neurons.
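A toy sketch of the claim above, assuming sigmoidal rate neurons (the gain and threshold values are illustrative): neurons whose thresholds tile the stimulus range produce population responses that change more between neighboring stimuli than a population sharing a single threshold does.

```python
import numpy as np

stimuli = np.linspace(0.0, 1.0, 50)

def responses(thresholds, gain=20.0):
    # Sigmoidal rate responses: one row per stimulus, one column per neuron.
    return 1.0 / (1.0 + np.exp(-gain * (stimuli[:, None] - thresholds[None, :])))

identical = responses(np.full(8, 0.5))          # all neurons share one threshold
diverse = responses(np.linspace(0.1, 0.9, 8))   # thresholds tile the stimulus range

def discriminability(r):
    # Crude proxy for coding quality: how far apart are population response
    # vectors for neighboring stimuli, summed over the stimulus range?
    return np.linalg.norm(np.diff(r, axis=0), axis=1).sum()

print(discriminability(diverse) > discriminability(identical))
```

With identical thresholds every neuron carries the same signal, so the population adds redundancy but no resolution; the diverse population spreads its steep (informative) response regions across the stimulus range.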

Adding both noise and heterogeneity to a population-coding network does not always improve coding.

Hunsberger et al. suggest that neural heterogeneity and response stochasticity both decorrelate and linearize population responses and thus improve transmission of information.
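A minimal sketch of the decorrelation part of this claim, using hypothetical rectified-linear rate neurons (thresholds and noise level are illustrative, not taken from Hunsberger et al.): two identical noiseless neurons driven by a shared input are perfectly correlated, while heterogeneous thresholds plus independent noise reduce the pairwise correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 1000)  # shared input to both neurons

def rate(x, threshold, noise=0.0):
    # Rectified-linear response with optional independent Gaussian noise.
    return np.maximum(x - threshold + noise * rng.standard_normal(x.size), 0.0)

# Identical, noiseless neurons: responses are perfectly correlated.
r1, r2 = rate(x, 0.5), rate(x, 0.5)
# Heterogeneous thresholds plus independent noise decorrelate them.
h1, h2 = rate(x, 0.3, noise=0.05), rate(x, 0.7, noise=0.05)

corr_same = np.corrcoef(r1, r2)[0, 1]
corr_het = np.corrcoef(h1, h2)[0, 1]
print(corr_het < corr_same)  # True
```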

According to Carandini and Heeger, microcircuit-level structures that are repeated throughout the brain implement what the authors call "canonical neural computations". Well-known examples of such canonical neural computations are:

  • exponentiation
  • linear filtering.
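The two computations above are often combined into a linear-nonlinear cascade. A minimal sketch, where the temporal filter and the exponent are illustrative choices rather than fitted values:

```python
import numpy as np

# Stimulus and a hypothetical exponentially decaying temporal filter.
stimulus = np.sin(np.linspace(0.0, 4.0 * np.pi, 200))
kernel = np.exp(-np.arange(10) / 3.0)
kernel /= kernel.sum()

filtered = np.convolve(stimulus, kernel, mode="same")  # linear filtering
rate = np.maximum(filtered, 0.0) ** 2.0                # rectification + exponentiation

print(rate.shape)
```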

Another canonical neural computation proposed by Carandini and Heeger is (divisive) normalization.
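A sketch of the normalization equation in the form Carandini and Heeger discuss, where each response is divided by a common factor that grows with the pooled activity; the parameter values here are illustrative:

```python
import numpy as np

def divisive_normalization(drive, sigma=1.0, n=2.0, gamma=1.0):
    # R_i = gamma * D_i^n / (sigma^n + sum_j D_j^n)
    # sigma sets the semi-saturation constant, n the exponent.
    d = np.asarray(drive, dtype=float) ** n
    return gamma * d / (sigma ** n + d.sum())

weak = divisive_normalization([1.0, 1.0, 1.0])
strong = divisive_normalization([10.0, 10.0, 10.0])
# Scaling all inputs up tenfold changes each normalized response
# far less than tenfold: the population response saturates.
print(strong / weak)
```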

Divisive normalization models describe neural responses well in cases of

  • olfactory perception in Drosophila,
  • visual processing in retina and V1,
  • possibly in other cortical areas,
  • modulation of responses through attention in visual cortex.

Divisive normalization models describe neural responses well in a number of instances of sensory processing.

Divisive normalization is probably implemented through (GABAergic) inhibition in some cases (the fruit fly olfactory system); in others (V1), it seems to be implemented by different means.