Show Tag: population-coding


Activity in the deep superior colliculus (SC) has been described in terms of different regions competing for access to motor resources.

Ma, Beck, Latham, and Pouget argue that optimal integration of population-coded probabilistic information can be achieved simply by adding the activities of neurons with identical receptive fields. The preconditions for this to hold are:

  • independent Poisson (or other "Poisson-like") noise in the input
  • identically-shaped tuning curves in input neurons
  • a point-to-point connection from neurons in different populations with identical receptive fields to the same output neuron.
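The additivity claim can be illustrated with a minimal numpy sketch. All parameters (Gaussian tuning curves, population gains, grid sizes) are illustrative assumptions, not taken from the paper: two input populations with identical tuning curves and independent Poisson noise observe the same stimulus, and decoding the neuron-by-neuron sum of their spike counts yields the same posterior shape as combining the two populations' likelihoods explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: two input populations with identical Gaussian tuning
# curves over a 1-D stimulus, differing only in gain (reliability),
# with independent Poisson spiking.
prefs = np.linspace(-10, 10, 41)   # preferred stimuli (assumed grid)
sigma = 2.0                        # tuning width (assumed)
gains = (15.0, 25.0)               # per-population gains (assumed)
stim = 1.5                         # true stimulus value

def rates(s, gain):
    """Mean firing rates of one population for stimulus s."""
    return gain * np.exp(-(s - prefs) ** 2 / (2 * sigma ** 2))

def log_likelihood(counts, s_grid, gain):
    """log P(counts | s) for independent Poisson neurons, dropping the
    count-factorial term, which is constant in s."""
    f = gain * np.exp(-(s_grid[:, None] - prefs[None, :]) ** 2
                      / (2 * sigma ** 2))
    return (counts * np.log(f) - f).sum(axis=1)

r1 = rng.poisson(rates(stim, gains[0]))
r2 = rng.poisson(rates(stim, gains[1]))

s_grid = np.linspace(-10, 10, 2001)
# Decoding the summed activity (a population with gain g1 + g2) ...
ll_sum = log_likelihood(r1 + r2, s_grid, gains[0] + gains[1])
# ... matches the sum of the individual log-likelihoods up to a
# constant offset, so the encoded posterior over s is identical.
ll_combined = (log_likelihood(r1, s_grid, gains[0])
               + log_likelihood(r2, s_grid, gains[1]))
```

The key point is that, for identically shaped Poisson tuning curves, `ll_sum - ll_combined` does not depend on `s`, so simply adding spike counts loses no information about the stimulus.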

Ma et al.'s paper is hard to interpret unambiguously, but it seems that, according to Renart and van Rossum, any other non-flat connectivity profile would also transmit the information optimally, although the decoding scheme might then have to be different.

Renart and van Rossum discuss optimal connection weight profiles between layers in a feed-forward neural network. They come to the conclusion that, if neurons in the input population have broad tuning curves, then Mexican-hat-like connectivity profiles are optimal.
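A Mexican-hat profile is commonly modeled as a difference of Gaussians: narrow excitation minus broader inhibition. The sketch below is only illustrative (the specific amplitudes and widths are assumptions, not values from Renart and van Rossum):

```python
import numpy as np

def mexican_hat(d, a_exc=1.0, s_exc=1.0, a_inh=0.5, s_inh=3.0):
    """Difference-of-Gaussians weight as a function of the distance d
    between preferred stimuli: narrow excitation minus broad inhibition.
    All parameters are illustrative assumptions."""
    return (a_exc * np.exp(-d ** 2 / (2 * s_exc ** 2))
            - a_inh * np.exp(-d ** 2 / (2 * s_inh ** 2)))

# Preferred stimuli of input and output neurons on a 1-D axis.
positions = np.linspace(-10, 10, 81)

# Full feed-forward weight matrix: W[i, j] connects input neuron j
# to output neuron i, depending only on their distance.
W = mexican_hat(positions[:, None] - positions[None, :])
```

With these parameters the profile is excitatory at short distances and inhibitory at intermediate ones, which sharpens broad input tuning curves in the output layer.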

Renart and van Rossum state that any non-flat connectivity profile between input and output layers in a feed-forward network yields optimal transmission if there is no noise in the output.

Ma et al.'s model is simple and requires no learning.

The "efficient coding principle" states that a neural ensemble should encode as much information as possible in its response.

A neural population may encode a probability density function if each neuron's response represents the probability (or log probability) of some concrete value of a latent variable.
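A toy version of such a probability code, under assumed parameters (the grid of latent values and the Gaussian belief are illustrative), looks like this: each neuron's activity is the discretized probability of its preferred value, and expectations are read out as weighted sums over the population.

```python
import numpy as np

# Sketch: the i-th neuron's activity represents the (discretized)
# probability that a latent variable X equals its preferred value.
values = np.linspace(-5, 5, 101)   # latent values covered (assumed grid)
mu, sigma = 0.5, 1.2               # parameters of the encoded belief (assumed)

pdf = np.exp(-(values - mu) ** 2 / (2 * sigma ** 2))
activity = pdf / pdf.sum()         # normalized: activities sum to 1

# The alternative mentioned above: a log-probability code.
log_activity = np.log(activity)

# Reading out an expectation is a weighted sum over the population;
# here it recovers (approximately) the mean of the encoded belief.
mean_estimate = (values * activity).sum()
```

A log-probability code has the practical advantage that multiplying distributions (e.g. combining independent evidence) becomes addition of activities.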