Show Reference: "Synaptic computation underlying probabilistic inference"

Synaptic computation underlying probabilistic inference. Nature Neuroscience, Vol. 13, No. 1 (13 January 2010), pp. 112-119, doi:10.1038/nn.2450, by Alireza Soltani and Xiao-Jing Wang.

@article{soltani2010synaptic,
    author = {Soltani, Alireza and Wang, Xiao-Jing},
    title = {Synaptic computation underlying probabilistic inference},
    journal = {Nature Neuroscience},
    volume = {13},
    number = {1},
    pages = {112--119},
    day = {13},
    month = jan,
    year = {2010},
    doi = {10.1038/nn.2450},
    issn = {1546-1726},
    pmid = {20010823},
    pmcid = {PMC2921378},
    publisher = {Nature Publishing Group},
    keywords = {ann, bayes, model},
    abstract = {We propose that synapses may be the workhorse of the neuronal computations that underlie probabilistic reasoning. We built a neural circuit model for probabilistic inference in which information provided by different sensory cues must be integrated and the predictive powers of individual cues about an outcome are deduced through experience. We found that bounded synapses naturally compute, through reward-dependent plasticity, the posterior probability that a choice alternative is correct given that a cue is presented. Furthermore, a decision circuit endowed with such synapses makes choices on the basis of the summed log posterior odds and performs near-optimal cue combination. The model was validated by reproducing salient observations of, and provides insights into, a monkey experiment using a categorization task. Our model thus suggests a biophysical instantiation of the Bayesian decision rule, while predicting important deviations from it similar to the 'base-rate neglect' observed in human studies when alternatives have unequal prior probabilities.}
}


Soltani and Wang propose an adaptive neural model of Bayesian inference that neglects prior probabilities, and they argue that it is consistent with salient observations from monkey experiments.

The lateral intraparietal area (LIP) seems to encode decision variables for saccade direction.

Soltani and Wang argue that their model is consistent with the 'base rate neglect' fallacy.

The base rate fallacy is an error occurring in human decision making in which people estimate a posterior probability without properly taking the prior probability into account, i.e. they judge solely on the basis of the likelihood.
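A small worked example of the fallacy (hypothetical numbers, chosen only for illustration): a diagnostic test with a low false-positive rate still yields a low posterior when the condition is rare, but a likelihood-only judgment ignores this.

```python
# Hypothetical numbers: a condition with a 1% base rate, a test with
# 99% sensitivity and a 5% false-positive rate.
p_disease = 0.01             # prior (base rate)
p_pos_given_disease = 0.99   # likelihood of a positive test if diseased
p_pos_given_healthy = 0.05   # false-positive rate

# Correct Bayesian posterior P(disease | positive test):
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
posterior = p_pos_given_disease * p_disease / p_pos

# Base-rate neglect: comparing only the likelihoods, as if both
# hypotheses were equally probable a priori.
neglect = p_pos_given_disease / (p_pos_given_disease + p_pos_given_healthy)

print(round(posterior, 3))  # 0.167
print(round(neglect, 3))    # 0.952
```

Despite a positive test, the correct posterior is only about 17%, while the likelihood-only estimate is about 95%.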

Soltani and Wang propose an adaptive model of Bayesian inference with binary cues.

In their model, a synaptic weight encodes the fraction of binary synapses in a population that are in the potentiated (as opposed to depressed) state, where each population is driven by the presynaptic activity encoding one binary cue.

The reward-dependent stochastic Hebbian learning rule drives each synaptic weight toward the posterior probability that a choice is rewarded given the cue, so that the neurons come to encode reward probability correctly.
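A mean-field sketch of such a bounded-synapse rule (my own toy version, not the authors' exact implementation): on cue-present trials, depressed synapses are potentiated with probability q_plus after reward and potentiated synapses are depressed with probability q_minus after no reward. The fraction c of potentiated synapses then settles at p·q_plus / (p·q_plus + (1-p)·q_minus), which equals the reward probability p when q_plus = q_minus.

```python
import random

# Toy mean-field simulation: c is the fraction of potentiated synapses
# in the population driven by one cue. Each trial with the cue present:
#   rewarded (prob. p_reward):  c <- c + q_plus  * (1 - c)
#   unrewarded:                 c <- c - q_minus * c
def learn(p_reward, q_plus=0.05, q_minus=0.05, trials=20000, seed=0):
    rng = random.Random(seed)
    c = 0.5  # initial fraction of potentiated synapses
    history = []
    for _ in range(trials):
        if rng.random() < p_reward:
            c += q_plus * (1 - c)
        else:
            c -= q_minus * c
        history.append(c)
    # time-average over the second half, after convergence
    tail = history[trials // 2:]
    return sum(tail) / len(tail)

print(round(learn(0.8), 2))  # close to 0.8, the cue's reward probability
```

With symmetric potentiation and depression rates, the weight tracks the probability of reward given the cue, which is the sense in which bounded synapses "compute the posterior" in the paper.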

Soltani and Wang propose a learning algorithm in which neurons predict reward for each action on the basis of individual cues; reward is then delivered stochastically depending on the action taken.

One of the benefits of Soltani and Wang's model is that it does not require the neurons to perform complex computations. Simply summing the input from active synapses yields a decision variable equivalent to the summed log posterior odds; the learning rule is what ensures that the correct fraction of synapses is active given the input.
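The decision stage can be sketched as follows (a simplified reading of the paper, with the per-cue posteriors taken as given rather than learned): each presented cue contributes its posterior probability that choice A is rewarded, the log posterior odds are summed across cues, and the sign of the sum picks the choice.

```python
import math

# Simplified decision rule: each cue i contributes a weight w_i that
# approximates P(choice A rewarded | cue i). Summing log posterior odds
# across the presented cues gives a decision variable; its sign picks
# the choice. (In the paper the choice is made by a stochastic
# winner-take-all circuit; the deterministic sign rule here is the
# noise-free limit.)
def decide(posteriors):
    logit = sum(math.log(p / (1 - p)) for p in posteriors)
    return ("A" if logit > 0 else "B"), logit

choice, logit = decide([0.7, 0.6, 0.4])
print(choice)  # "A": two cues favor A, one weakly favors B
```

Because the odds enter additively in log space, evidence from independent cues combines by simple summation, which is what makes near-optimal cue combination cheap for the circuit.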

Soltani and Wang only consider percepts and rewards. They do not model any generative causes behind the two.

LIP is retinotopic and involved in gaze shifts.