Cuppini et al. present a model of the SC that exhibits many of the properties of neural connectivity, electrophysiology, and development that have been found experimentally.
The model of the SC due to Cuppini et al. reproduces the development of multi-sensory integration.
The model due to Cuppini et al. comprises distinct neural populations for the different sensory modalities.
The model due to Cuppini et al. does not need neural multiplication to implement super-additivity or inverse effectiveness. Instead, it exploits the sigmoid transfer function of the multi-sensory neurons: because of this sigmoid and because of less-than-unit weights between input and multi-sensory neurons, weak stimuli fall into the shallow lower region of the sigmoid and evoke less-than-linear responses in multi-sensory neurons. However, the sum of two such stimuli (from different modalities) can reach the steep, approximately linear range of the sigmoid, so the combined response can be much greater than the sum of the individual responses.
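This superadditivity-from-a-sigmoid argument can be checked numerically. The sketch below is a toy illustration, not Cuppini et al.'s actual model: the threshold, slope, weight, and stimulus values are invented.

```python
import numpy as np

def sigmoid(x, theta=4.0):
    # logistic transfer function of a multi-sensory neuron, centered at theta
    return 1.0 / (1.0 + np.exp(-(x - theta)))

w = 0.8  # less-than-unit weight between input and multi-sensory neurons

def response(total_input):
    return sigmoid(w * total_input)

def enhancement(stim):
    # combined response relative to the sum of the two uni-sensory responses
    return response(2 * stim) / (2 * response(stim))

# a weak stimulus lands in the sigmoid's shallow lower tail, so the sum of
# two weak stimuli, which reaches the steep region, is superadditive;
# for stronger stimuli the enhancement shrinks (inverse effectiveness)
print(enhancement(2.0), enhancement(4.0))
```

No multiplicative interaction appears anywhere: both effects fall out of summation followed by a saturating nonlinearity.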
Through lateral connections, a Hebbian learning rule, and approximate initialization, Cuppini et al. manage to learn spatial register between sensory maps. This can be seen as an implementation of a SOM.
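The core of this idea can be sketched in a toy 1-D version. This is not Cuppini et al.'s architecture; the map size, Gaussian widths, and learning rate below are all invented. Two maps receive co-located Gaussian activity, and a normalized Hebbian rule pulls the cross-modal weights into register:

```python
import numpy as np

n = 20
x = np.arange(n)
W = np.full((n, n), 1.0 / n)   # roughly (uniformly) initialized cross-modal weights

def activity(center, width=2.0):
    # Gaussian population response centered on the stimulus location
    return np.exp(-0.5 * ((x - center) / width) ** 2)

for _ in range(100):           # repeated exposure to co-located stimuli
    for loc in range(n):
        pre, post = activity(loc), activity(loc)
        W += 0.01 * np.outer(post, pre)       # Hebbian strengthening
        W /= W.sum(axis=1, keepdims=True)     # normalization bounds the weights

# away from the map borders, each neuron ends up most strongly connected
# to the corresponding location in the other map
aligned = W.argmax(axis=1)[5:15]
print(aligned)
```

The normalization step plays the role that competitive mechanisms play in a SOM: without it, the Hebbian term would grow without bound.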
Cuppini et al. use mutually inhibitory, modality-specific interneurons (each receiving input from one modality and inhibiting the interneurons that receive input from the other modalities) to implement a winner-take-all mechanism between modalities; this leads to a visual (or auditory) capture effect without functional multi-sensory integration.
Their network model builds upon their earlier single-neuron model.
Not sure about the biological motivation of this. Also: it would be interesting to know whether functional integration still occurs.
Cuppini et al. do not evaluate their model's performance (comparability to cat/human performance, optimality, ...).
The model due to Cuppini et al. is inspired only by observed neurophysiology; it has no normative inspiration.
Rucci et al. present an algorithm which performs auditory localization and combines auditory and visual localization in a common SC map. The mapping between the representations is learned using value-dependent learning.
Rucci et al.'s neural network learns how to align ICx and SC (OT) maps by means of value-dependent learning: the value signal depends on whether the target was in the fovea after a saccade.
Rucci et al.'s model of learning to combine ICx and SC maps does not take into account the point-to-point projections from SC to ICx reported later by Knudsen et al.
Anastasio and Patton model the deep SC using SOM learning.
In Anastasio and Patton's SC model, the spatial organization of the SOM is not used to represent the spatial organization of the outside world, but to distribute different sensitivities to the input modalities across different neurons.
It's a bit strange that Anastasio and Patton's and Martin et al.'s SC models use the spatial organization of the SOM not to represent the spatial organization of the outside world but to distribute different sensitivities to the input modalities across neurons. KNN (or sparse coding) seems more appropriate for that.
Beck et al. model build-up in the SC as accumulation of evidence from sensory input.
Tabareau et al. propose a scheme for a transformation from the topographic mapping in the SC to the temporal code of the saccadic burst generators.
According to their analysis, that code needs to be either linear or logarithmic.
Girard and Berthoz review saccade system models, including models of the SC.
Except for two of the SC models, all focus on the generation of saccades and do not consider sensory processing, in particular multisensory integration.
The SC model presented by Cuppini et al. has a circular topology to prevent the border effect.
Rucci et al. model learning of audio-visual map alignment in the barn owl SC. In their model, projections from the retina to the SC are fixed (and visual RFs are therefore static), and connections from ICx are adapted through value-dependent learning.
The fact that multi-sensory integration arises without reward connected to stimuli motivates unsupervised learning approaches to SC modeling.
Colonius and Diederich's explanation for uni-sensory neurons in the deep SC has a few weaknesses: First, they model the input spiking activity in both the target and the non-target case as Poisson distributed. This is a problem because the input spiking activity is really a function of the target's distance from the center of the RF. Second, they explicitly model the probability that a target is visible as independent of the probability that it is audible.
The leaky-integrate-and-fire model due to Rowland and Stein models a single multisensory SC neuron receiving input from a number of sensory, cortical, and sub-cortical sources.
Each of the sources is modeled as a single input to the SC neuron.
Local inhibitory interaction between neurons in multi-sensory trials is modeled by a single time-variant subtractive term which sets in shortly after the actual sensory input and thus does not influence the first phase of the response after stimulus onset.
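The structure of such a model can be sketched as a generic leaky integrator. The time constants, gains, and onsets below are invented, not Rowland and Stein's parameters: each source contributes one smoothed input trace, and a delayed subtractive term stands in for local inhibition, leaving the early response untouched.

```python
import numpy as np

dt, tau = 1.0, 20.0                  # ms; membrane time constant
t = np.arange(0.0, 300.0, dt)

def source(onset, gain, tau_s=30.0):
    # single smoothed input trace from one source, rising after `onset`
    s = np.clip(t - onset, 0.0, None) / tau_s
    return gain * s * np.exp(1.0 - s)

# one trace each for a sensory, a cortical, and a sub-cortical source
drive = source(20, 1.0) + source(35, 0.6) + source(50, 0.4)
inhibition = source(80, 0.5)         # subtractive term, sets in after the inputs

def membrane(drive, inhibition):
    # leaky integration of excitation minus the subtractive inhibition
    v = np.zeros_like(t)
    for i in range(1, len(t)):
        v[i] = v[i - 1] + dt * (-v[i - 1] + drive[i - 1] - inhibition[i - 1]) / tau
    return v

v = membrane(drive, inhibition)
v_no_inh = membrane(drive, np.zeros_like(t))
```

Because the inhibition is zero until its onset, the two traces are identical over the first phase of the response and only diverge later.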
The model due to Rowland and Stein does not consider the spatial properties of input or output. In reality, the same sources of input (retina, LGN, association cortex) may convey information about stimulus conditions in different regions of space, and neurons at different positions in the SC react to different stimuli.
Rowland and Stein focus on the temporal dynamics of multisensory integration.
Rowland and Stein's goal is only to generate neural responses like those observed in real SC neurons under realistic biological constraints. The model does not give any explanation of neural responses on the functional level.
The network characteristics of the SC are modeled only very roughly by Rowland and Stein's model.
The model due to Rowland and Stein manages to reproduce the nonlinear time course of neural responses as well as enhancement in magnitude and inverse effectiveness in multisensory integration in the SC.
Since the model does not include spatial properties, it does not reproduce the spatial principle (i.e., no depression).
Patton and Anastasio present a model of "enhancement and modality-specific suppression in multi-sensory neurons" that requires no multiplicative interaction. It is a follow-up to their earlier functional model of these neurons, which requires complex computation.
Anastasio et al. present a model of the response properties of multi-sensory SC neurons which explains enhancement, depression, and super-additivity using Bayes' rule: if one assumes that a neuron integrates its input to infer the posterior probability of a stimulus source being present in its receptive field, then these effects arise naturally.
Anastasio et al.'s model of SC neurons assumes that these neurons receive multiple inputs with Poisson noise and apply Bayes' rule to calculate the posterior probability of a stimulus being in their receptive fields.
Anastasio et al. point out that, given their model of SC neurons computing the probability of a stimulus being in their RF from Poisson-noised input, a sigmoid response function arises for uni-sensory input.
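This can be verified directly. The sketch below assumes Poisson rates `lam_on` and `lam_off` (with and without a target in the RF) and a flat prior; these values are illustrative, not taken from the paper. Since the log-odds are linear in the spike count, the posterior is a logistic (sigmoid) function of the uni-sensory input.

```python
import math

def posterior(counts, lam_on=10.0, lam_off=2.0, prior=0.5):
    # Bayes' rule with independent Poisson likelihoods, one per input channel
    lr = 1.0
    for n in counts:
        like_on = math.exp(-lam_on) * lam_on ** n / math.factorial(n)
        like_off = math.exp(-lam_off) * lam_off ** n / math.factorial(n)
        lr *= like_on / like_off
    odds = prior / (1.0 - prior) * lr
    return odds / (1.0 + odds)

# posterior as a function of a single input's spike count: a sigmoid in n
uni = [posterior([n]) for n in range(16)]
print(uni)
```

Adding a second input multiplies in another likelihood ratio, so a moderately informative input from a second modality raises the posterior above the uni-sensory one, which is the enhancement effect in this framework.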
The ANN model of multi-sensory integration in the SC due to Ohshiro et al. manages to replicate a number of physiological findings about the SC.
However, it does not learn, and it has no probabilistic motivation.
The model due to Ohshiro et al. uses divisive normalization to model multisensory integration in the SC.
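The principle can be sketched with a single-neuron caricature of divisive normalization. This is not Ohshiro et al.'s network: the exponent and semi-saturation constant are assumptions, and the normalization pool is collapsed into the neuron's own drive.

```python
def response(drive, sigma=0.5, n=2.0):
    # divisive normalization: the driving input, raised to a power, is
    # divided by pooled activity plus a semi-saturation constant sigma
    d = drive ** n
    return d / (sigma ** n + d)

def enhancement(stim):
    # combined response relative to the sum of the two uni-sensory responses
    return response(2 * stim) / (2 * response(stim))

# inverse effectiveness: enhancement is larger for weak stimuli, because
# strong drives are squashed by the normalization denominator
print(enhancement(0.2), enhancement(1.0))
```

The saturating denominator does the work that the sigmoid does in Cuppini et al.'s model: weak drives combine nearly supra-additively, strong drives sub-additively.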
Rowland et al. derive a model of cortico-collicular multi-sensory integration from findings concerning the influence of deactivation or ablation of the cortical regions anterior ectosylvian cortex (AES) and rostral lateral suprasylvian cortex.
It is a single-neuron model.
Cuppini et al. expand on their earlier work in modeling cortico-tectal multi-sensory integration.
They present a model which shows how receptive fields and multi-sensory integration can arise through experience.
Trappenberg presents a competitive spiking neural network for generating motor output of the SC.
Need to look at models of multi-sensory integration as well; they are not necessarily models of the SC, but relevant.
Anastasio et al. have come up with a Bayesian interpretation of neural responses to multi-sensory stimuli in the SC. According to their view, enhancement, depression, and inverse effectiveness phenomena are due to neurons integrating uncertain information from different sensory modalities.
Rucci et al. model multi-sensory integration in the barn owl OT using leaky integrator firing-rate neurons and reinforcement learning.
Rucci et al. test their model of multi-sensory integration in the barn owl OT in a robot.
Rucci et al. suggest that high saliency in the center of the visual field can act as a reward signal for pre-saccadic neural activation.
Pitti et al. use a Hebbian learning algorithm to learn somato-visual register.
Hebbian learning, and in particular SOM-like algorithms, have been used to model cross-sensory spatial register (e.g., in the SC).
Bauer et al. present a SOM variant which learns the variances of different sensory modalities (assuming Gaussian noise) to model multi-sensory integration in the SC.
Bauer and Wermter use the algorithm they proposed to model multi-sensory integration in the SC. They show that it can learn to integrate noisy multi-sensory information near-optimally, and that it reproduces spatial register of sensory maps, the spatial principle, the principle of inverse effectiveness, and near-optimal audio-visual integration in object localization.
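"Near-optimal" here refers to maximum-likelihood cue combination, which can be sketched independently of the SOM algorithm. The positions and noise levels below are toy numbers; Gaussian noise is assumed, as in the model.

```python
import numpy as np

rng = np.random.default_rng(1)
true_pos = 10.0
sig_v, sig_a = 1.0, 3.0     # visual estimates are more reliable than auditory

v = true_pos + sig_v * rng.standard_normal(100_000)   # noisy visual estimates
a = true_pos + sig_a * rng.standard_normal(100_000)   # noisy auditory estimates

# maximum-likelihood fusion weights each cue by its inverse variance
w_v = (1 / sig_v**2) / (1 / sig_v**2 + 1 / sig_a**2)
fused = w_v * v + (1 - w_v) * a

# the fused estimate is less variable than either cue alone
print(v.var(), a.var(), fused.var())
```

Matching this inverse-variance benchmark in localization is what qualifies an integration model as near-optimal.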
Pitti et al. claim that their model explains the preference for face-like visual stimuli and that it can help explain imitation in newborns. According to their model, the SC would develop face detection through somato-visual integration.