Reading population codes: a neural implementation of ideal observers. Nature neuroscience, Vol. 2, No. 8. (August 1999), pp. 740-745, doi:10.1038/11205 by S. Deneve, P. E. Latham, A. Pouget
@article{deneve-et-al-1999,
    abstract = {Many sensory and motor variables are encoded in the nervous system by the activities of large populations of neurons with bell-shaped tuning curves. Extracting information from these population codes is difficult because of the noise inherent in neuronal responses. In most cases of interest, maximum likelihood ({ML}) is the best read-out method and would be used by an ideal observer. Using simulations and analysis, we show that a close approximation to {ML} can be implemented in a biologically plausible model of cortical circuitry. Our results apply to a wide range of nonlinear activation functions, suggesting that cortical areas may, in general, function as ideal observers of activity in preceding areas.},
    author = {Deneve, S. and Latham, P. E. and Pouget, A.},
    doi = {10.1038/11205},
    issn = {1097-6256},
    journal = {Nature neuroscience},
    keywords = {bayes, population-coding},
    month = aug,
    number = {8},
    pages = {740--745},
    pmid = {10412064},
    posted-at = {2012-07-02 16:52:15},
    priority = {2},
    title = {Reading population codes: a neural implementation of ideal observers.},
    url = {http://dx.doi.org/10.1038/11205},
    volume = {2},
    year = {1999}
}


Deneve et al. propose a recurrent network that fits a smooth template to (Poisson-)noisy input activity, thereby implementing an estimator of the encoded stimulus. The authors show analytically and in simulations that the network closely approximates a maximum likelihood estimator. The network's dynamics are governed by divisive normalization, and the neurons' input tuning curves are hard-wired.
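A minimal 1-D sketch of this style of dynamics: lateral pooling through translation-invariant weights, squaring, and divisive normalization iteratively sharpen a noisy activity pattern into a smooth hill whose peak serves as the estimate. The weight profile and the constants `S` and `mu` below are illustrative choices, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64                                   # number of neurons / preferred stimuli
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

def tuning(stim, width=0.35, gain=20.0):
    """Bell-shaped (circular Gaussian-like) tuning curves."""
    return gain * np.exp((np.cos(theta - stim) - 1) / width)

# Poisson-noisy population response to a stimulus at pi
f = tuning(np.pi)
activity = rng.poisson(f).astype(float)

# Translation-invariant lateral weights (illustrative profile, not the paper's)
W = np.exp((np.cos(theta[:, None] - theta[None, :]) - 1) / 0.35)

S, mu = 0.1, 0.002                       # illustrative normalization constants
for _ in range(50):
    u = W @ activity                             # lateral pooling
    activity = u**2 / (S + mu * np.sum(u**2))    # divisive normalization

# The network settles on a smooth hill; its peak is the stimulus estimate
estimate = theta[np.argmax(activity)]
```

Squaring sharpens the hill each iteration, while the lateral weights re-smooth it and the normalization keeps total activity bounded, so the dynamics relax to a stereotyped template centered near the noisy input's peak.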

MLE provides an optimal method of reading population codes.
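For independent Poisson noise, the ideal-observer read-out the paper benchmarks against can be written down directly: maximize the Poisson log-likelihood of the observed spike counts over candidate stimulus values. A hedged sketch (the tuning-curve shape and grid resolution are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

N = 64
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

def tuning(stim, width=0.35, gain=20.0):
    """Bell-shaped tuning curves f_i(s)."""
    return gain * np.exp((np.cos(theta - stim) - 1) / width)

true_stim = np.pi
r = rng.poisson(tuning(true_stim))       # noisy spike counts

# Poisson log-likelihood: sum_i [ r_i * log f_i(s) - f_i(s) ]
# (the log r_i! term is constant in s and can be dropped)
grid = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
loglik = [np.sum(r * np.log(tuning(s)) - tuning(s)) for s in grid]
s_ml = grid[int(np.argmax(loglik))]
```

With many active neurons, the ML estimate's variance approaches the Cramer-Rao bound, which is what makes ML the natural benchmark for any biological read-out.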

Yamashita et al. modify Deneve et al.'s network by weakening divisive normalization and lateral inhibition. As a result, their network integrates the localizations from the simulated modalities into a single hill of activation when the disparity between them is low, and maintains multiple hills of activation when the disparity is high, thus accounting for the ventriloquism effect.

Deneve et al.'s model uses divisive normalization.

Yamashita et al. argue that, since the weight profiles in their network determine whether two stimuli from different modalities at a given disparity are integrated, a Bayesian prior is effectively encoded in these weights.

Deneve et al.'s model (2001) does not compute a population code; it mainly recovers a clean population code from a noisy one.