Efficient computation and cue integration with noisy population codes *Nature Neuroscience*, Vol. 4, No. 8. (01 August 2001), pp. 826-831, doi:10.1038/90541 by Sophie Deneve, Peter E. Latham, Alexandre Pouget

@article{deneve-et-al-2001,
  abstract   = {The brain represents sensory and motor variables through the activity of large populations of neurons. It is not understood how the nervous system computes with these population codes, given that individual neurons are noisy and thus unreliable. We focus here on two general types of computation, function approximation and cue integration, as these are powerful enough to handle a range of tasks, including sensorimotor transformations, feature extraction in sensory systems and multisensory integration. We demonstrate that a particular class of neural networks, basis function networks with multidimensional attractors, can perform both types of computation optimally with noisy neurons. Moreover, neurons in the intermediate layers of our model show response properties similar to those observed in several multimodal cortical areas. Thus, basis function networks with multidimensional attractors may be used by the brain to compute efficiently with population codes.},
  address    = {Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627, USA.},
  author     = {Deneve, Sophie and Latham, Peter E. and Pouget, Alexandre},
  day        = {01},
  doi        = {10.1038/90541},
  issn       = {1097-6256},
  journal    = {Nature Neuroscience},
  keywords   = {ann, bayes, probability},
  month      = aug,
  number     = {8},
  pages      = {826--831},
  pmid       = {11477429},
  posted-at  = {2012-01-27 12:58:18},
  priority   = {2},
  publisher  = {Nature Publishing Group},
  title      = {Efficient computation and cue integration with noisy population codes},
  url        = {http://dx.doi.org/10.1038/90541},
  volume     = {4},
  year       = {2001}
}

Deneve et al. use basis function networks with multidimensional attractors for

- function approximation
- cue integration.

They reduce both computations to maximum likelihood estimation and show that their network's performance comes close to that of a maximum likelihood estimator.
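For intuition, the maximum likelihood benchmark for cue integration can be sketched numerically. This is not the paper's attractor network, just the statistical baseline it is compared against: for two cues corrupted by independent Gaussian noise, the ML estimate is the inverse-variance weighted average of the single-cue estimates. The noise levels and stimulus value below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two cues (say, visual and auditory) report the same
# stimulus value s with independent Gaussian noise of different reliability.
s_true = 10.0
sigma_v, sigma_a = 1.0, 2.0  # assumed noise levels, not from the paper

n_trials = 100_000
cue_v = s_true + sigma_v * rng.standard_normal(n_trials)
cue_a = s_true + sigma_a * rng.standard_normal(n_trials)

# Maximum likelihood combination: weight each cue by its inverse variance.
w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
s_ml = w_v * cue_v + (1 - w_v) * cue_a

# The ML estimate is more reliable than either cue alone; its predicted
# variance is 1 / (1/sigma_v^2 + 1/sigma_a^2) = 0.8 here.
var_ml_pred = 1 / (1 / sigma_v**2 + 1 / sigma_a**2)
print(np.var(cue_v), np.var(cue_a), np.var(s_ml), var_ml_pred)
```

The paper's result is that the basis function network with a multidimensional attractor approaches this variance bound even though its units are noisy.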