Show Tag: poisson

In Anastasio et al.'s model of multi-sensory integration in the SC, an SC neuron is connected to one neuron from each modality whose spiking behavior is a (Poisson) probabilistic function of whether there is a target in that modality or not.

Their single SC neuron then computes the posterior probability of there being a target given its inputs (evidence) and the prior.
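
To make the computation concrete, here is a minimal sketch, assuming two conditionally independent Poisson inputs (one visual, one auditory); the rates and the prior are made-up values, not parameters from Anastasio et al.'s paper.

```python
from scipy.stats import poisson

# Hypothetical parameters (not from the paper): spontaneous vs. driven
# mean spike counts per interval, and a prior probability of a target.
RATE_NO_TARGET = 2.0
RATE_TARGET = 8.0
P_TARGET = 0.1

def posterior_target(visual_count, auditory_count):
    """P(target | counts), assuming conditionally independent Poisson inputs."""
    like_t = (poisson.pmf(visual_count, RATE_TARGET)
              * poisson.pmf(auditory_count, RATE_TARGET))
    like_nt = (poisson.pmf(visual_count, RATE_NO_TARGET)
               * poisson.pmf(auditory_count, RATE_NO_TARGET))
    evidence = like_t * P_TARGET + like_nt * (1 - P_TARGET)
    return like_t * P_TARGET / evidence

print(posterior_target(6, 7))  # both inputs driven: posterior near 1
print(posterior_target(6, 2))  # only the visual input driven: much lower
```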

Under the assumption that neural noise is independent between neurons, Anastasio et al.'s approach can be extended by making each input neuron its own modality.

Bayesian integration becomes more complex, however, because receptive fields are not sharp. The formulae still hold, but the neurons cannot simply use Poisson statistics to integrate.

Anastasio et al. use their model to explain multisensory enhancement and the principle of inverse effectiveness.

The activity profiles for stimuli moving through superficial SC neuron RFs shown in Cynader and Berman's work look similar to Poisson-noisy Gaussians; however, the authors state that the strength of a response to a stimulus was the same regardless of where in the activating region it was presented.

Anastasio et al.'s model of SC neurons assumes that these neurons receive multiple inputs with Poisson noise and apply Bayes' rule to calculate the posterior probability of a stimulus being in their receptive fields.

Anastasio et al. point out that, given their model of SC neurons computing the probability of a stimulus being in their RF with Poisson-noised input, a sigmoid response function arises for uni-sensory input.
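
This follows because, with Poisson likelihoods, the posterior log-odds are linear in the spike count, so the posterior itself is a logistic (sigmoid) function of the count. A small sketch with the same made-up rates as above:

```python
import numpy as np

RATE_NO_TARGET, RATE_TARGET, P_TARGET = 2.0, 8.0, 0.1

def posterior_unisensory(n):
    """Posterior from one Poisson input: a logistic function of the count n,
    since log P(n|target)/P(n|none) = n*log(r1/r0) - (r1 - r0)."""
    log_odds = (np.log(P_TARGET / (1 - P_TARGET))
                + n * np.log(RATE_TARGET / RATE_NO_TARGET)
                - (RATE_TARGET - RATE_NO_TARGET))
    return 1.0 / (1.0 + np.exp(-log_odds))

for n in range(0, 13, 2):
    print(n, round(posterior_unisensory(n), 3))  # traces out a sigmoid
```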

Deneve describes neurons as integrating probabilities based on single incoming spikes. Spikes are seen as outcomes of Poisson processes, and the neurons' task is to infer the hidden value of those processes' parameter(s). She uses the leaky integrate-and-fire neuron as the basis for her model.

Deneve models a changing world: hidden variables may change according to a Markov chain, and her neural model deals with that. Wow.

Hidden variables in Deneve's model seem to be binary. Differences between synapses (actually, between their inputs) come down to weights describing how `informative' of the hidden variable they are.

The leakiness of neurons in Deneve's model is due to changing world conditions: since the hidden state may switch at any moment, older evidence has to decay.
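
Pulling these notes together, here is a discretized sketch, assuming a binary hidden state that switches as a Markov chain and a single Poisson input whose rate depends on that state; the nonlinear `leak' term implements the possibility that the world has changed. All constants are hypothetical, and the update is my simplification rather than Deneve's exact equations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical constants, not Deneve's published values.
dt = 0.001                   # time step (s)
r_on, r_off = 1.0, 1.0       # hidden-state transition rates (Hz)
q_on, q_off = 40.0, 10.0     # input firing rates given state on/off (Hz)
w = np.log(q_on / q_off)     # synaptic weight: log likelihood ratio per spike
theta = (q_on - q_off) * dt  # constant drift from the rate difference

L = 0.0        # log-odds of the hidden state being 'on'
state = 0
n_steps = int(5.0 / dt)
correct = 0
for _ in range(n_steps):
    # Hidden world: a two-state Markov chain.
    if state == 0 and rng.random() < r_on * dt:
        state = 1
    elif state == 1 and rng.random() < r_off * dt:
        state = 0
    # Input: Poisson spiking at a state-dependent rate.
    spike = rng.random() < (q_on if state else q_off) * dt
    # Nonlinear 'leak': evidence decays because the state may have switched.
    L += dt * (r_on * (1 + np.exp(-L)) - r_off * (1 + np.exp(L)))
    # Evidence: each spike adds w; theta compensates the expected drift.
    L += w * spike - theta
    correct += int((L > 0) == bool(state))

print("sign of L matches hidden state:", correct / n_steps)
```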

Neurons in Deneve's model actually generate Poisson-like output themselves (though deterministically).

The spike train a neuron generates is described as predictive: a neuron $n_1$ fires if the probability $P_1(t)$ it estimates from its input is greater than the probability $P_2(t)$ that another neuron $n_2$ could estimate from $n_1$'s output so far.
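
A minimal sketch of that firing rule as I read it (simplified: in the full model the downstream estimate $G$ is subject to the same leak dynamics as $L$, which is omitted here):

```python
def maybe_spike(L, G, threshold=1.0):
    """Predictive output rule (simplified): fire only when the neuron's own
    log-odds estimate L runs ahead of the estimate G that a downstream
    observer could reconstruct from this neuron's past spikes."""
    if L - G > threshold:
        return True, G + threshold  # the spike itself raises the observer's estimate
    return False, G
```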

There seems to be a linear relationship between the mean and variance of neural responses in cortex. This is similar to a Poisson distribution, where the variance equals the mean; in biology, however, the constant of proportionality does not seem to be one.
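
One toy way to see that a linear mean-variance relation need not have slope one (an illustration of the statistics only, not a claim about the biological mechanism): if each underlying Poisson event contributes a fixed batch of $k$ spikes, the variance is $k$ times the mean.

```python
import numpy as np

rng = np.random.default_rng(1)

lam, k = 5.0, 2  # underlying Poisson rate; each event contributes k spikes
counts = k * rng.poisson(lam, size=100_000)
print("mean:", counts.mean())                        # ~ k * lam = 10
print("variance:", counts.var())                     # ~ k**2 * lam = 20
print("Fano factor:", counts.var() / counts.mean())  # ~ k = 2: linear, but != 1
```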

Seung and Sompolinsky introduce maximum likelihood estimation (MLE) as one possible mechanism for neural read-out. However, they state that it is not clear whether MLE can be implemented in a biologically plausible way.

Seung and Sompolinsky show that, in a population code with wide tuning curves and Poisson noise, and under the conditions described in their paper, the response of neurons near threshold carries exceptionally high information.
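
A toy demonstration of an MLE read-out, assuming hypothetical Gaussian tuning curves and a grid search; this shows the statistical operation itself, not a biologically plausible implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical population: Gaussian tuning curves tiling the stimulus axis.
centers = np.linspace(-10, 10, 41)
sigma, r_max = 2.0, 20.0

def rates(s):
    return r_max * np.exp(-0.5 * ((s - centers) / sigma) ** 2)

s_true = 1.3
counts = rng.poisson(rates(s_true))  # one trial of Poisson spike counts

# ML read-out: maximize the Poisson log-likelihood over a stimulus grid.
grid = np.linspace(-10, 10, 2001)
f = r_max * np.exp(-0.5 * ((grid[:, None] - centers) / sigma) ** 2)
log_lik = (counts * np.log(f) - f).sum(axis=1)  # sum_i n_i log f_i(s) - f_i(s)
print("true stimulus:", s_true, " MLE:", grid[np.argmax(log_lik)])
```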

Deneve describes how neurons performing Bayesian inference on the hidden variables behind Poisson inputs can learn the parameters of those Poisson processes through an online variant of the expectation-maximization (EM) algorithm.
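
Extending the inference sketch above with a toy online E/M loop for a single input: the E-step runs the log-odds filter with the current rate estimates, and the M-step nudges each rate toward the observed spike rate, weighted by the posterior responsibility. This is my simplified reading, not Deneve's exact rule; the asymmetric initial estimates are needed to break the on/off symmetry.

```python
import numpy as np

rng = np.random.default_rng(3)

dt = 0.001
r_on = r_off = 1.0
q_on_true, q_off_true = 40.0, 10.0  # the rates the neuron has to learn

# Asymmetric initial estimates break the on/off symmetry.
q_on_hat, q_off_hat = 30.0, 15.0
eta = 0.05  # learning rate of the online M-step

state, L = 0, 0.0
for _ in range(int(200.0 / dt)):
    # World: hidden Markov state and a state-dependent Poisson input.
    if state == 0 and rng.random() < r_on * dt:
        state = 1
    elif state == 1 and rng.random() < r_off * dt:
        state = 0
    spike = rng.random() < (q_on_true if state else q_off_true) * dt
    # E-step: log-odds filtering with the *current* rate estimates.
    w = np.log(q_on_hat / q_off_hat)
    L += dt * (r_on * (1 + np.exp(-L)) - r_off * (1 + np.exp(L)))
    L += w * spike - (q_on_hat - q_off_hat) * dt
    p_on = 1.0 / (1.0 + np.exp(-L))
    # M-step (online): move each rate toward the observed spike rate,
    # weighted by the posterior responsibility of the corresponding state.
    q_on_hat += eta * p_on * (spike - q_on_hat * dt)
    q_off_hat += eta * (1 - p_on) * (spike - q_off_hat * dt)

print("learned rates:", round(q_on_hat, 1), round(q_off_hat, 1))
```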

Deneve associates her EM-based learning rule in Bayesian spiking neurons with spike-timing-dependent plasticity (STDP).