# Show Tag: spiking


Although Adams et al. argue that biomimetic approaches to robotics promise lower energy consumption and processing requirements, they implicitly acknowledge that using spiking neural networks increases these requirements and is only feasible, if at all, because of recent developments in software and hardware.

Adams et al. present a Spiking Neural Network implementation of a SOM which uses

• spike-time dependent plasticity
• a method to adapt the learning rate
• constant neighborhood interaction width
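
The combination above can be sketched schematically. The following is a rate-based, Kohonen-style stand-in (not Adams et al.'s actual algorithm): a per-winner decaying learning rate stands in for the adaptive-learning-rate mechanism, a fixed Gaussian width gives the constant neighborhood interaction, and the STDP dynamics of the spiking version are not modeled; all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not Adams et al.'s parameters): 2-D inputs, 5x5 map.
n_units, dim = 25, 2
weights = rng.random((n_units, dim))
grid = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)

sigma = 1.0                      # constant neighborhood interaction width
win_counts = np.zeros(n_units)   # basis for the learning-rate adaptation

for step in range(1000):
    x = rng.random(dim)
    winner = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    win_counts[winner] += 1
    # Per-winner decaying learning rate: a simple stand-in for the
    # adaptive-learning-rate mechanism.
    eta = 0.5 / (1.0 + win_counts[winner])
    # Gaussian neighborhood with *constant* width sigma.
    h = np.exp(-np.linalg.norm(grid - grid[winner], axis=1) ** 2
               / (2 * sigma ** 2))
    # Rate-based Kohonen update toward the input.
    weights += eta * h[:, None] * (x - weights)
```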

Adams et al. note that there have been a number of attempts at spiking SOM implementations (and list a few).

The model of biological computation of ITDs proposed by Jeffress extracts ITDs by means of delay lines and coincidence detecting neurons:

The peaks of the sound pressure at each ear lead, via a semi-mechanical process, to peaks in the activity of certain auditory nerve fibers. Those fibers connect to coincidence-detecting neurons. Different delays in connections from the two ears lead to coincidence for different ITDs, thus making these coincidence-detecting neurons selective for different angles to the sound source.
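
The delay-line-plus-coincidence idea reduces to a cross-correlation over candidate delays. A toy sketch (sample-based delays and made-up spike trains, not a neural implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 48_000                      # samples per second
itd_true = 8                     # interaural time difference, in samples

# Toy "auditory nerve" activity: the right ear receives the left ear's
# spike pattern shifted by the ITD (no jitter, for clarity).
left = (rng.random(fs // 10) < 0.01).astype(int)
right = np.roll(left, itd_true)

# A bank of coincidence detectors, each fed the left input through a
# different delay line; the detector whose delay matches the ITD sees
# the most coincident spikes.
delays = np.arange(-15, 16)
coincidences = np.array([np.sum(np.roll(left, d) * right) for d in delays])
best = int(delays[np.argmax(coincidences)])   # recovers itd_true
```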

Liu et al.'s model of the IC includes a Jeffress-type model of the MSO.

Recent neurophysiological evidence seems to contradict the details of Jeffress' model.

The amount of information encoded in neural spiking (within a certain time window) is finite and can be estimated.
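
One way to see that it is finite and estimable: discretize the window into bins and apply a plug-in entropy estimate to the resulting binary words (a toy version of this idea; the spike words below are made up). With B bins the information per window is bounded by B bits.

```python
import math
from collections import Counter

# Toy spike trains: binary words of length 5 (5 time bins per window).
words = [
    (0, 1, 0, 0, 1), (0, 1, 0, 0, 1), (1, 0, 0, 0, 0),
    (0, 1, 0, 0, 1), (1, 0, 0, 0, 0), (0, 0, 0, 1, 1),
]

counts = Counter(words)
n = len(words)
# Plug-in (maximum-likelihood) entropy estimate, in bits.
entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
# Here entropy is about 1.46 bits, well below the 5-bit bound.
```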

Deco and Rolls introduce a system that uses a trace learning rule to learn recognition of more and more complex visual features in successive layers of a neural architecture. In each layer, the specificity of the features increases together with the receptive fields of neurons until the receptive fields span most of the visual range and the features actually code for objects. This model thus is a model of the development of object-based attention.
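
A trace rule in the style used for this kind of invariance learning (a Földiák-style sketch; the activities and constants below are made up, and Deco and Rolls' exact formulation differs in detail) binds the current input to a decaying trace of recent post-synaptic activity, so inputs seen close together in time come to drive the same neuron:

```python
import numpy as np

eta = 0.1      # learning rate
lam = 0.8      # how strongly the trace remembers past activity

# Hypothetical post-synaptic activities over time and the corresponding
# pre-synaptic input vectors (e.g., successive transformed views of the
# same object, followed by a different input).
y = [1.0, 0.9, 1.0, 0.2]
x = np.array([[1.0, 0.0], [0.9, 0.1], [1.0, 0.0], [0.0, 1.0]])

w = np.zeros(2)
trace = 0.0
for t in range(len(y)):
    # Temporal trace of post-synaptic activity.
    trace = (1 - lam) * y[t] + lam * trace
    # The weight change binds the *current* input to the trace.
    w += eta * trace * x[t]
```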

Deneve describes neurons as integrating probabilities based on single incoming spikes. Spikes are seen as outcomes of Poisson processes and neurons are to infer the hidden value of those processes' parameter(s). She uses the leaky integrate-and-fire neuron as the basis for her model.
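
A minimal discrete-time sketch of this integration (illustrative rates and weights, not Deneve's exact formulation): each input spike adds the log-likelihood ratio carried by that synapse, and silence between spikes contributes a negative drift.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

dt = 0.001                       # seconds per step
q_on, q_off = 100.0, 10.0        # input rates (Hz) when the hidden
                                 # variable is on vs. off (toy values)
T = 2000
hidden = np.concatenate([np.zeros(T // 2), np.ones(T // 2)])

w = math.log(q_on / q_off)       # log-likelihood ratio carried by one spike
drift = -(q_on - q_off) * dt     # silence between spikes is evidence too

L = 0.0                          # log-odds that the hidden variable is on
log_odds = []
for t in range(T):
    rate = q_on if hidden[t] else q_off
    spike = rng.random() < rate * dt
    L += w * spike + drift
    log_odds.append(L)

# While the hidden variable is off, L drifts downward; once it switches
# on, the spike-driven increments outweigh the drift and L climbs.
```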

Deneve models a changing world: hidden variables may change according to a Markov chain, and her neural model accounts for this.

Hidden variables in Deneve's model seem to be binary. Differences between synapses (actually, their inputs) are due to weights describing how "informative" of the hidden variable they are.

Leakiness of neurons in Deneve's model is due to changing world conditions.

Neurons in Deneve's model actually generate Poisson-like output themselves (though deterministically).

The output process a neuron generates is described as predictive: a neuron $n_1$ fires if the probability $P_1(t)$ estimated by $n_1$ based on its input is greater than the probability $P_2(t)$ that another neuron $n_2$ would estimate based on $n_1$'s output.
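
A simplified sketch of this predictive firing rule (not Deneve's exact equations): the neuron tracks its own log-odds estimate L and a prediction G of what a downstream observer would infer from its output spikes alone, and spikes only when L exceeds G by a threshold, so each spike signals new evidence. Parameter values are made up.

```python
theta = 1.0      # firing threshold on the prediction error L - G
g_step = 1.0     # how much one output spike raises the prediction

def run(evidence):
    """Feed a sequence of log-likelihood increments; return spike times."""
    L, G, spikes = 0.0, 0.0, []
    for t, dL in enumerate(evidence):
        L += dL
        if L - G > theta:
            spikes.append(t)
            G += g_step
    return spikes

# Constant weak evidence: the neuron spikes whenever the accumulated
# evidence gets one threshold ahead of what it has already signaled.
spikes = run([0.3] * 50)
```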

Deneve's model is not a leaky integrate-and-fire (LIF) model, but she demonstrates the connection. She states that LIF is "far from describing the dynamics of real neurons".

Although their spiking behavior is described by non-linear functions, the output rate of Deneve's neurons is a linear (rectified) function of the (rate-coded) input.

Jerome Feldman talks about the sub-problem of "General Considerations of Coordination" of the general Binding Problem as more or less a problem of synchronization and states that modeling efforts are well underway, taking into account physiological details such as spiking behavior and neuronal oscillations.

Morén et al. present a spiking model of the SC.

Trappenberg presents a competitive spiking neural network for generating motor output of the SC.

Some models assume that SC output encodes saccade amplitude and direction. In other models, each spike from a burst neuron encodes a motion segment whose length and direction depend on the position of the neuron and the strength of its connections to brainstem areas.

Deneve describes how neurons performing Bayesian inference on variables behind Poisson inputs can learn the parameters of the Poisson processes in an online variant of the expectation maximization (EM) algorithm.
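
A generic online-EM sketch for learning the rates of a two-component Poisson mixture (a stand-in illustrating the idea of online EM on Poisson inputs, not Deneve's actual update equations; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: spike counts per window, generated by a hidden binary state
# that selects one of two Poisson rates (the parameters to be learned).
true_rates = (2.0, 12.0)
states = rng.random(3000) < 0.5
counts = rng.poisson(np.where(states, true_rates[1], true_rates[0]))

rates = np.array([1.0, 5.0])     # initial rate guesses
pi = np.array([0.5, 0.5])        # mixture weights
s_resp = np.array([1e-2, 1e-2])  # running sufficient statistics
s_count = np.array([1e-2, 5e-2])

for i, k in enumerate(counts):
    eta = 1.0 / (i + 2)          # decaying step size
    # E-step: responsibility of each component for this count
    # (the constant log k! term cancels between components).
    loglik = k * np.log(rates) - rates + np.log(pi)
    resp = np.exp(loglik - loglik.max())
    resp /= resp.sum()
    # M-step: blend the new sufficient statistics into the running ones.
    s_resp = (1 - eta) * s_resp + eta * resp
    s_count = (1 - eta) * s_count + eta * resp * k
    rates = s_count / s_resp
    pi = s_resp / s_resp.sum()

# rates should end up near the true rates (2 and 12, in some order).
```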

Deneve associates her EM-based learning rule for Bayesian spiking neurons with spike-time-dependent plasticity (STDP).