Show Tag: population-codes

Bauer and Wermter show how probabilistic population codes and near-optimal integration can develop.

Horizontal sound-source localization is population-coded in the central nucleus of the mustache bat's inferior colliculus.

Population codes occur naturally in biological neural systems.

Deneve et al. propose a recurrent network that fits a template to (Poisson-)noisy input activity, implementing an estimator of the original input. The authors show analytically and in simulations that the network approximates a maximum likelihood estimator. The network's dynamics are governed by divisive normalization, and the neurons' input tuning curves are hard-wired.
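
A minimal sketch of this kind of denoising dynamics, loosely in the spirit of Deneve et al.'s squaring-plus-divisive-normalization update (the tuning curves, lateral weights, and all parameter values are my own illustrative assumptions, not the paper's):

```python
import numpy as np

# Sketch: a recurrent network that relaxes a Poisson-noisy population
# response onto a smooth activity hill, whose peak estimates the stimulus.
rng = np.random.default_rng(0)

n = 64
prefs = np.linspace(-np.pi, np.pi, n, endpoint=False)  # preferred stimuli

def tuning(theta, gain=20.0, width=0.5):
    """Mean firing rates for stimulus theta (circular Gaussian tuning)."""
    d = np.angle(np.exp(1j * (prefs - theta)))          # wrapped difference
    return gain * np.exp(-d**2 / (2 * width**2))

theta_true = 0.7
a = rng.poisson(tuning(theta_true)).astype(float)       # noisy input activity

# Hard-wired, translation-invariant lateral weights.
d = np.angle(np.exp(1j * (prefs[:, None] - prefs[None, :])))
W = np.exp(-d**2 / (2 * 0.5**2))

for _ in range(30):
    u = W @ a                                 # lateral pooling
    a = u**2 / (1.0 + 0.01 * np.sum(u**2))    # divisive normalization

theta_hat = prefs[np.argmax(a)]               # peak of the cleaned-up hill
print(theta_true, theta_hat)
```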

Beck et al. model build-up in the superior colliculus (SC) as accumulation of evidence from sensory input.

Cuijpers and Erlhagen use neural fields to implement Bayes' rule for combining the activities of neural populations spatially encoding probability distributions.

Beck et al. argue that simply adding a population code's responses, time point by time point, will integrate the information optimally if the noise in the input is what they call "Poisson-like".

That is somewhat expected: in a Poisson distribution with mean $\lambda$, the variance is also $\lambda$ and the standard deviation is $\sqrt{\lambda}$. Adding population responses is equivalent to counting spikes over a longer period of time, which increases the mean of the distribution; the signal-to-noise ratio $\lambda/\sqrt{\lambda} = \sqrt{\lambda}$ therefore grows with the square root of the count.
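
A quick numerical check of this argument (a toy demonstration of my own, not from the paper): summing $T$ independent Poisson counts with mean $\lambda$ yields a Poisson count with mean $\lambda T$, so the signal-to-noise ratio should grow as $\sqrt{\lambda T}$.

```python
import numpy as np

# Toy check: summing Poisson spike counts raises the SNR as sqrt(lambda * T).
rng = np.random.default_rng(0)
lam, trials = 5.0, 100_000

for T in (1, 4, 16, 64):
    counts = rng.poisson(lam, size=(trials, T)).sum(axis=1)
    snr = counts.mean() / counts.std()
    print(f"T={T:3d}  empirical SNR={snr:6.2f}  sqrt(lam*T)={np.sqrt(lam * T):6.2f}")
```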

Sensory maps are not required to ensure coverage of the sensory spectrum.

A self-organizing map (SOM) in which each unit computes the probability of some value of a sensory variable produces a probabilistic population code, i.e. it computes a population-coded probability density function.

Maximum likelihood estimation (MLE) provides an optimal method of reading out population codes.

It's hard to implement MLE on population codes using neural networks.

Depending on the application, tuning curves, and noise properties, threshold-linear networks computing population vectors can perform similarly to MLE.
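
The comparison can be made concrete in a toy setup (entirely my own construction, assuming circular Gaussian tuning and independent Poisson noise): decode the same noisy population responses once with a population vector and once with MLE, and compare the errors.

```python
import numpy as np

# Population-vector vs. maximum-likelihood readout of a circular variable.
rng = np.random.default_rng(1)

n = 32
prefs = np.linspace(-np.pi, np.pi, n, endpoint=False)

def rates(theta, gain=10.0, width=0.6):
    d = np.angle(np.exp(1j * (prefs - theta)))
    return gain * np.exp(-d**2 / (2 * width**2))

grid = np.linspace(-np.pi, np.pi, 721)
log_f = np.log(np.array([rates(t) for t in grid]))   # (grid point, neuron)

theta = 1.0
pv_err, ml_err = [], []
for _ in range(2000):
    r = rng.poisson(rates(theta))
    # Population vector: activity-weighted circular mean of preferred values.
    pv = np.angle(np.sum(r * np.exp(1j * prefs)))
    # Poisson MLE: argmax of sum_i r_i log f_i(theta); the term sum_i f_i(theta)
    # is constant here because the tuning curves tile the circle evenly.
    ml = grid[np.argmax(log_f @ r)]
    pv_err.append(np.angle(np.exp(1j * (pv - theta))) ** 2)
    ml_err.append(np.angle(np.exp(1j * (ml - theta))) ** 2)

print("PV RMSE:", np.sqrt(np.mean(pv_err)), "ML RMSE:", np.sqrt(np.mean(ml_err)))
```

In this symmetric setup the two readouts should come out close, consistent with the note above; MLE's advantage grows when tuning or noise is less well-behaved.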

Jazayeri and Movshon call population vectors and winner-takes-all mechanisms "suboptimal under most conditions of interest."

Translating a population code into just one value (or vector) discards all information about uncertainty.

Jazayeri and Movshon present an ANN model for computing likelihood functions (proportional to posterior probability density functions under uniform priors) from input population responses with arbitrary tuning functions.

Their assumptions are

  • restricted types of noise characteristics (e.g. Poisson noise)
  • statistically independent noise

Since they work with log likelihoods, products of probabilities become sums, so they circumvent the problem of requiring neural multiplication.
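
Concretely, under independent Poisson noise the log likelihood of the counts $n_i$ is $\log L(\theta) = \sum_i n_i \log f_i(\theta) - \sum_i f_i(\theta) + \mathrm{const}$, so a downstream unit only needs to sum its inputs with fixed weights $\log f_i(\theta)$. The sketch below (my illustration with assumed Gaussian tuning curves, not the authors' implementation) computes a full likelihood function this way:

```python
import numpy as np

# Log-likelihood readout: each candidate value theta gets a unit whose
# activation is the spike counts weighted by log f_i(theta). With evenly
# tiling tuning curves, sum_i f_i(theta) is constant and can be dropped.
rng = np.random.default_rng(2)

n = 48
prefs = np.linspace(-np.pi, np.pi, n, endpoint=False)

def f(theta, gain=15.0, width=0.5):
    d = np.angle(np.exp(1j * (prefs - theta)))
    return gain * np.exp(-d**2 / (2 * width**2))

spikes = rng.poisson(f(0.3))                            # observed response

grid = np.linspace(-np.pi, np.pi, 361)
logL = np.array([spikes @ np.log(f(t)) for t in grid])  # weighted sums

posterior = np.exp(logL - logL.max())                   # uniform prior
posterior /= posterior.sum()
print("MAP estimate:", grid[np.argmax(posterior)])
```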

Since combinatorial and sparse codes are known to be efficient, it would make sense if the states of generative models were usually encoded in them.

That would preclude redundant population codes, unless our notion of efficiency takes redundancy into account.

In an efficient population code, neural responses are statistically independent.

A neural population may encode a probability density function if each neuron's response represents the probability (or log probability) of some concrete value of a latent variable.
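
The two conventions can be spelled out with a trivially small example (a made-up population with one hypothetical neuron per bin of the latent variable):

```python
import numpy as np

# A population of 21 hypothetical neurons, one per bin x_i of a latent variable.
x = np.linspace(-3, 3, 21)
p = np.exp(-x**2 / 2)
p /= p.sum()                  # the pdf to be encoded (discretized Gaussian)

a_prob = p                    # convention 1: rate proportional to probability
a_logp = np.log(p)            # convention 2: rate proportional to log probability

# A probability code supports direct readout of moments:
mean = np.sum(x * a_prob)
var = np.sum((x - mean)**2 * a_prob)

# A log-probability code is exponentiated and renormalized first:
p_rec = np.exp(a_logp)
p_rec /= p_rec.sum()
print(mean, var, np.allclose(p_rec, p))
```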

Early visual neurons (eg. in V1) do not seem to encode probabilities.

I'm not so sure that early visual neurons don't encode probabilities. The question is: which probabilities do they encode? That of a line being there?

The later an estimate is made explicit from a (probabilistic) neural population code, the less information is lost in the conversion.

A SOM models a population, and each unit has a response to a stimulus; it is therefore possible to read out a population code from a SOM. In the standard SOM, this population code is not very meaningful. Given a more statistically motivated distance function, it can be made more meaningful.
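
One way to sketch that idea (my illustration, not Bauer and Wermter's algorithm): if unit activations are computed as log likelihoods under an assumed noise model rather than as raw Euclidean distances, normalizing them turns the population response into a posterior over units.

```python
import numpy as np

# SOM readout as a population code. The prototypes stand in for a trained
# map; sigma is an assumed input-noise level, not a learned quantity.
rng = np.random.default_rng(3)

units = rng.uniform(0, 1, size=(100, 2))   # stand-in for trained SOM prototypes
x = np.array([0.4, 0.6])                   # input sample
sigma = 0.1                                # assumed isotropic Gaussian noise

# Log likelihood of the input under each unit's prototype:
log_lik = -np.sum((units - x)**2, axis=1) / (2 * sigma**2)
post = np.exp(log_lik - log_lik.max())
post /= post.sum()                         # population-coded posterior over units

print("best unit:", np.argmax(post))
print("posterior entropy:", -(post * np.log(post)).sum())
```

The entropy of this readout expresses uncertainty, which the standard winner-take-all SOM response discards.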

The direction of a saccade is population-coded in the SC.

There exist two hypotheses for how saccade trajectory is population-coded in the SC:

  • the sum of the contributions of all neurons
  • the weighted average of contributions of all neurons

The difference is in whether or not the population response is normalized.

According to Lee et al., the vector summation hypothesis predicts that any deactivation of motor neurons should result in hypometric saccades because their contribution is missing.

According to the weighted average hypothesis, the error depends on where the saccade target lies with respect to the preferred direction of the deactivated neurons.

Lee et al. found that deactivation of SC motor neurons did not always lead to hypometric saccades. Instead, saccades generally deviated away from the preferred direction of the deactivated neurons. They took this as support for the vector averaging hypothesis.
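
The normalization difference, and the two contrasting deactivation predictions, can be made explicit in a toy computation (all numbers below are made up for illustration, not data from Lee et al.):

```python
import numpy as np

# Vector summation vs. weighted averaging of motor-map contributions.
a = np.array([0.2, 1.0, 0.6])                        # neuron activities
v = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]])   # preferred saccade vectors

summed = (a[:, None] * v).sum(axis=0)                # summation hypothesis
averaged = summed / a.sum()                          # averaging hypothesis

# Deactivate neuron 1: the summed vector shrinks (a hypometric saccade),
# while the averaged vector instead rotates away from v[1].
a_deact = a.copy()
a_deact[1] = 0.0
summed_d = (a_deact[:, None] * v).sum(axis=0)
averaged_d = summed_d / a_deact.sum()

print(summed, "->", summed_d)        # amplitude drops under summation
print(averaged, "->", averaged_d)    # direction shifts under averaging
```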

Deneve et al.'s model (2001) does not compute a population code; it mainly recovers a clean population code from a noisy one.

Georgopoulos et al. introduced the notion of population coding and population vector readout.

Neural populations can compute and encode probability density functions for external variables.

Bauer and Wermter present an ANN algorithm which takes from the self-organizing map (SOM) algorithm the ability to learn a latent variable model from its input. They extend the SOM algorithm so it learns about the distribution of noise in the input and computes probability density functions over the latent variables. The algorithm represents these probability density functions using population codes. This is done with very few assumptions about the distribution of noise.

Bauer and Wermter use the algorithm they proposed to model multi-sensory integration in the SC. They show that it can learn to near-optimally integrate noisy multi-sensory information and reproduces spatial register of sensory maps, the spatial principle, the principle of inverse effectiveness, and near-optimal audio-visual integration in object localization.