The model of natural multisensory integration and localization is based on the leaky integrate-and-fire (LIF) neuron model.
Rucci et al. explain audio-visual map registration and the learning of orienting responses to audio-visual stimuli by what they call value-dependent learning: after each motor response, a modulatory system evaluates whether that response was good, bringing the target into the center of the system's visual field, or bad. The learning rule strengthens connections between neurons from the network's different subpopulations if they are highly correlated whenever the modulatory response is strong, and weakens them otherwise.
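The gist of such a value-dependent rule can be sketched as a Hebbian update gated by a scalar value signal. This is an illustrative sketch only: the population sizes, learning rate, and clipping below are hypothetical and are not taken from Rucci et al.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and learning rate (not from Rucci et al.)
n_pre, n_post = 8, 8
eta = 0.05
W = rng.uniform(0.0, 0.1, size=(n_post, n_pre))  # initial connection weights

def value_dependent_update(W, pre, post, value):
    """Strengthen correlated pre/post pairs when the modulatory value
    signal is positive (response brought the target to the center of
    the visual field), weaken them when it is negative."""
    corr = np.outer(post, pre)                   # Hebbian correlation term
    return np.clip(W + eta * value * corr, 0.0, 1.0)

# One "good" motor response (value = +1) and one "bad" one (value = -1)
pre = rng.random(n_pre)
post = rng.random(n_post)
W_good = value_dependent_update(W, pre, post, value=+1.0)
W_bad = value_dependent_update(W, pre, post, value=-1.0)
```

The value signal simply scales (and signs) the usual correlation-based update, so the same activity pattern is reinforced after a good response and suppressed after a bad one.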
The leaky integrate-and-fire model due to Rowland and Stein describes a single multisensory SC neuron receiving input from a number of sensory, cortical, and sub-cortical sources.
Each of the sources is modeled as a single input to the SC neuron.
Local inhibitory interaction between neurons in multisensory trials is modeled by a single time-variant subtractive term. This term sets in shortly after the actual sensory input and therefore does not influence the first phase of the response after stimulus onset.
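A minimal sketch of this arrangement is a single LIF neuron that sums its input currents and receives a delayed subtractive inhibition. All parameter values below (time constants, drive strengths, delays) are illustrative assumptions and are not fitted to Rowland and Stein's data.

```python
# Illustrative parameters; not fitted to Rowland and Stein's model.
dt, T = 1.0, 200.0          # time step and duration (ms)
tau, v_rest, v_th = 20.0, 0.0, 1.0
t_onset = 20.0              # stimulus onset (ms)
t_inh, g_inh = 60.0, 0.6    # inhibition delay after onset (ms) and strength

def simulate(inputs):
    """Single LIF neuron summing several sensory/cortical/sub-cortical
    input currents; a subtractive inhibitory term sets in t_inh ms after
    stimulus onset, sparing the first phase of the response."""
    v, spikes = v_rest, []
    for k in range(int(T / dt)):
        t = k * dt
        drive = sum(inputs) if t >= t_onset else 0.0
        inh = g_inh if t >= t_onset + t_inh else 0.0
        v += (-(v - v_rest) + drive - inh) * dt / tau
        if v >= v_th:           # fire and reset
            spikes.append(t)
            v = v_rest
    return spikes

multi = simulate([1.5, 1.5])    # cross-modal trial: two active inputs
uni = simulate([1.5])           # unimodal trial: one active input
```

Because the inhibition only kicks in after the delay, the earliest spikes are unaffected by it, while later firing is suppressed; the cross-modal trial with two active inputs fires more than the unimodal one.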
Deneve describes neurons as integrating probabilities on the basis of single incoming spikes: spikes are seen as outcomes of Poisson processes, and the task of a neuron is to infer the hidden value of those processes' parameters. She relates her model to the leaky integrate-and-fire neuron.
Deneve's model is not a leaky integrate-and-fire (LIF) model, but she demonstrates its connection to one; she states that the LIF model is `far from describing the dynamics of real neurons'.
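The flavor of this spike-based inference can be sketched as a leaky accumulator of log-odds evidence: each incoming spike favors the hypothesis that a high-rate hidden cause is active, silence favors the low-rate one, and a leak discounts old evidence. This is a crude illustrative approximation, not Deneve's actual model; the rates and the leak constant below are assumptions.

```python
import math
import random

random.seed(1)

# Illustrative rates, not Deneve's parameters: the hidden cause, when
# "on", drives input spikes at q_on Hz; when "off", at q_off Hz.
q_on, q_off = 40.0, 10.0
dt, T = 0.001, 1.0              # time step and duration (s)
leak = 2.0                      # 1/s; discounts old evidence (the "leak")

def gen_spikes(rate):
    """Poisson spike train, represented as a set of time-step indices."""
    return {k for k in range(int(T / dt)) if random.random() < rate * dt}

def log_odds(spike_steps):
    """Accumulate the log-odds that the hidden cause is 'on'.
    Each spike adds log(q_on / q_off); silence drains evidence at
    (q_on - q_off) per second; the leak term forgets old evidence,
    which is what ties this scheme to a leaky integrator."""
    L, jump = 0.0, math.log(q_on / q_off)
    for k in range(int(T / dt)):
        L += dt * (-leak * L - (q_on - q_off))
        if k in spike_steps:
            L += jump
    return L

L_on = log_odds(gen_spikes(q_on))    # input generated by the "on" cause
L_off = log_odds(gen_spikes(q_off))  # input generated by the "off" cause
```

Run on spike trains generated by the high-rate cause, the accumulated log-odds end up positive; on trains from the low-rate cause, negative, so the sign of the accumulator reads out the inferred hidden state.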