Show Reference: "Bayesian Spiking Neurons I: Inference"

Bayesian Spiking Neurons I: Inference. Neural Computation, Vol. 20, No. 1 (28 November 2008), pp. 91-117, doi:10.1162/neco.2008.20.1.91, by Sophie Deneve
@article{deneve-2008,
    abstract = {We show that the dynamics of spiking neurons can be interpreted as a form of Bayesian inference in time. Neurons that optimally integrate evidence about events in the external world exhibit properties similar to leaky integrate-and-fire neurons with spike-dependent adaptation and maximally respond to fluctuations of their input. Spikes signal the occurrence of new information: what cannot be predicted from the past activity. As a result, firing statistics are close to Poisson, albeit providing a deterministic representation of probabilities.},
    author = {Deneve, Sophie},
    day = {28},
    doi = {10.1162/neco.2008.20.1.91},
    issn = {0899-7667},
    journal = {Neural Computation},
    keywords = {bayes, math, spiking},
    month = nov,
    number = {1},
    pages = {91--117},
    pmid = {18045002},
    posted-at = {2012-11-09 09:13:52},
    priority = {2},
    publisher = {MIT Press},
    title = {Bayesian Spiking Neurons I: Inference},
    url = {http://dx.doi.org/10.1162/neco.2008.20.1.91},
    volume = {20},
    year = {2008}
}


Deneve describes neurons as integrating probabilities from single incoming spikes. Incoming spikes are treated as the output of Poisson processes, and the neuron's task is to infer the hidden variable that sets those processes' rates. She relates her model closely to the leaky integrate-and-fire neuron.

Deneve models a changing world: the hidden variables may change over time according to a Markov chain, and her neural model handles that. Wow.

Hidden variables in Deneve's model seem to be binary. Differences between synapses (more precisely, between their inputs) come down to weights describing how 'informative' of the hidden variable they are.
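As a sketch of what this inference could look like in discrete time: the snippet below uses the standard log-odds filtering equations for a binary Markov state observed through Poisson spike trains, with each synaptic weight being the log-likelihood ratio one spike carries. All rates here are made-up values, and the notation is mine rather than the paper's.

```python
import math
import random

random.seed(0)

dt = 0.001               # simulation time step (s)
r_on, r_off = 1.0, 1.0   # assumed Markov transition rates of the hidden state
q_on = [20.0, 15.0]      # per-synapse Poisson rates when the state is on (made up)
q_off = [5.0, 10.0]      # ... and when it is off

# A synapse's weight is the log-likelihood ratio carried by one of its spikes.
w = [math.log(a / b) for a, b in zip(q_on, q_off)]
theta = sum(a - b for a, b in zip(q_on, q_off))  # expected-rate offset

x = 0    # hidden binary state of the world
L = 0.0  # log posterior odds: log P(x=1 | spikes) / P(x=0 | spikes)
for _ in range(int(5.0 / dt)):
    # world: two-state Markov chain
    if x == 0 and random.random() < r_on * dt:
        x = 1
    elif x == 1 and random.random() < r_off * dt:
        x = 0
    # evidence: one Bernoulli draw per synapse approximates Poisson spiking
    spikes = [random.random() < (q_on[i] if x else q_off[i]) * dt
              for i in range(len(w))]
    # filtering update: leak terms come from the transition rates,
    # discrete jumps of size w[i] come from incoming spikes
    L += dt * (r_on * (1 + math.exp(-L)) - r_off * (1 + math.exp(L)) - theta)
    L += sum(wi for wi, s in zip(w, spikes) if s)

p = 1.0 / (1.0 + math.exp(-L))   # posterior probability that x = 1
print(p)
```

Note how a spike on a synapse with q_on > q_off pushes the log odds up by a fixed weight, while the transition-rate terms continuously pull the estimate back toward its prior.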

Leakiness of neurons in Deneve's model is due to the changing world: because the hidden state can switch at any moment, old evidence must be progressively discounted.

Neurons in Deneve's model actually generate Poisson-like output themselves (though deterministically).

The output spike train a neuron generates is described as predictive: a neuron $n_1$ fires when the probability $P_1(t)$ it estimates from its input exceeds the probability $P_2(t)$ that a downstream observer $n_2$ can predict from $n_1$'s past output spikes.
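A minimal sketch of such a predictive spike rule, under my own assumptions (a constant evidence drive, a made-up threshold/jump size g0, and a shared transition-rate leak on both estimates): the neuron tracks its own log-odds estimate L, maintains the estimate G that an observer could reconstruct from its output spikes alone, and fires only when L outruns G.

```python
import math

dt = 0.001
r_on, r_off = 1.0, 1.0   # assumed hidden-state transition rates
g0 = 1.0                 # output threshold and per-spike jump size (made up)

def leak(y):
    # drift toward the prior, driven by the state transition rates
    return r_on * (1 + math.exp(-y)) - r_off * (1 + math.exp(y))

L = 0.0   # the neuron's own log-odds estimate, driven by a constant input here
G = 0.0   # the estimate a downstream observer rebuilds from the output spikes
out_spikes = 0
for _ in range(5000):
    drive = 3.0   # hypothetical net evidence per unit time
    L += dt * (leak(L) + drive)
    G += dt * leak(G)
    # predictive rule: fire only when the own estimate exceeds the
    # reconstruction by more than one spike's worth of information
    if L - G > g0:
        G += g0           # the emitted spike updates the observer's estimate
        out_spikes += 1

print(out_spikes)
```

Because each spike conveys exactly the information the observer is missing, the output train ends up irregular and sparse even though the rule itself is fully deterministic, matching the Poisson-like-but-deterministic point above.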

Deneve's model is not a leaky integrate-and-fire (LIF) model, but she demonstrates the connection. She states that LIF is "far from describing the dynamics of real neurons".

Although their spiking behavior is described by non-linear functions, the output rate of Deneve's neurons is a linear (rectified) function of the (rate-coded) input.

Deneve describes how neurons performing Bayesian inference on the hidden variables behind Poisson inputs can also learn the parameters of those Poisson processes, via an online variant of the expectation-maximization (EM) algorithm.
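A rough sketch of what such an online EM scheme could look like for a single synapse. All constants, the forgetting factor, and the use of the filtering posterior as the E-step are my assumptions, not the paper's exact algorithm: the neuron filters the hidden state with its current rate estimates (E-step), then re-estimates each rate as posterior-weighted spike count over posterior-weighted time (M-step), with old statistics decaying so the update stays online.

```python
import math
import random

random.seed(1)

dt = 0.001
r_on, r_off = 1.0, 1.0              # assumed hidden-state transition rates
q_on_true, q_off_true = 20.0, 5.0   # true input rates, unknown to the neuron
q_on, q_off = 15.0, 8.0             # initial guesses to be refined
decay = 0.02                        # forgetting rate for old statistics (made up)

x, L = 0, 0.0
t_on, t_off = 1.0, 1.0    # posterior-weighted time in each state (pseudo-counts)
n_on, n_off = 15.0, 8.0   # posterior-weighted spike counts (pseudo-counts)
for _ in range(int(200.0 / dt)):
    # world: Markov state plus a single Poisson input synapse
    if x == 0 and random.random() < r_on * dt:
        x = 1
    elif x == 1 and random.random() < r_off * dt:
        x = 0
    s = 1 if random.random() < (q_on_true if x else q_off_true) * dt else 0

    # E-step: filter the hidden state using the current rate estimates
    w = math.log(q_on / q_off)
    theta = q_on - q_off
    L += dt * (r_on * (1 + math.exp(-L)) - r_off * (1 + math.exp(L)) - theta)
    L += w * s
    p = 1.0 / (1.0 + math.exp(-L))   # current posterior that the state is on

    # M-step, online: decay old sufficient statistics, add the new ones,
    # and re-estimate each rate as (weighted spikes) / (weighted time)
    f = 1.0 - decay * dt
    t_on, t_off = f * t_on + dt * p, f * t_off + dt * (1 - p)
    n_on, n_off = f * n_on + s * p, f * n_off + s * (1 - p)
    q_on, q_off = n_on / t_on, n_off / t_off

print(q_on, q_off)
```

Note the chicken-and-egg structure typical of EM: the posterior used to credit each spike to a state depends on the very rate estimates being learned, which is why the estimates must start asymmetric for the symmetry to break.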