Cuppini et al. use mutually inhibitory, modality-specific interneurons (inhibitory interneurons that receive input from one modality and inhibit the inhibitory interneurons receiving input from the other modality) to implement a winner-take-all mechanism between modalities; this produces a visual (or auditory) capture effect without genuine multi-sensory integration.
Their network model builds on their earlier single-neuron model.
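The competition principle at work here can be illustrated with a toy rate model. The sketch below is not Cuppini et al.'s network: it simplifies their interneuron circuit to direct mutual inhibition between two modality-specific units, and all parameter values are made up for illustration. The unit receiving the stronger input suppresses the other, so only one modality's signal survives.

```python
# Toy winner-take-all via cross-modal mutual inhibition.
# Purely illustrative; structure and parameters are NOT taken from
# Cuppini et al.'s model, only the competition principle is.

def relu(x):
    return max(x, 0.0)

def wta(input_v, input_a, steps=200, dt=0.1, inhibition=2.0):
    """Two rate units; each is driven by its own modality's input and
    inhibited by the other unit's activity. Euler integration of
    dr/dt = -r + relu(input - inhibition * r_other)."""
    r_v = r_a = 0.0
    for _ in range(steps):
        r_v += dt * (-r_v + relu(input_v - inhibition * r_a))
        r_a += dt * (-r_a + relu(input_a - inhibition * r_v))
    return r_v, r_a

# Vision gets slightly stronger input and ends up capturing the response:
# r_v settles near its input level while r_a is driven to (almost) zero.
r_v, r_a = wta(1.0, 0.8)
```

Because the suppressed unit is silenced rather than averaged in, the network's output reflects only the winning modality, which is why this mechanism yields capture rather than integration.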
Localization of audiovisual targets is usually determined more by the location of the visual sub-target than by that of the auditory sub-target.
Especially in situations where visual stimuli are seen clearly and thus localized very easily, this can lead to the so-called ventriloquism effect (aka `visual capture'), in which a sound source seems to be located at the position of the visual target although it is in fact a few degrees away from it.
Battaglia et al. studied the spatial ventriloquism effect and found that their subjects neither followed an MLE model exactly nor had their localization completely captured by vision.
Alais and Burr found in an audio-visual localization experiment that the ventriloquism effect can be explained by a simple cue-weighting model of human multi-sensory integration:
Their subjects weighted visual and auditory cues depending on their reliability, and the weights they used were consistent with MLE. In most situations, visual cues are much more reliable for localization than auditory cues. A visual cue is therefore given so much greater weight that it effectively captures the auditory cue.
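The reliability weighting described above corresponds to inverse-variance (MLE) cue combination. The following sketch uses made-up stimulus positions and noise levels (not Alais and Burr's data) to show how a reliable visual cue dominates the combined location estimate:

```python
import math

# Illustrative values only (degrees); not data from Alais and Burr.
x_v, sigma_v = 0.0, 1.0   # visual cue: reliable (small noise)
x_a, sigma_a = 5.0, 4.0   # auditory cue: offset and much noisier

# MLE weights are proportional to each cue's reliability (inverse variance).
w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
w_a = 1 - w_v

# The combined estimate lies close to the visual cue ("visual capture"),
# and its standard deviation is lower than that of either single cue.
x_hat = w_v * x_v + w_a * x_a
sigma_hat = math.sqrt(1 / (1 / sigma_v**2 + 1 / sigma_a**2))
```

With these numbers the visual weight is about 0.94, so the combined location falls well under a degree from the visual target despite the five-degree auditory offset.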
The ventriloquism aftereffect occurs when an auditory stimulus is initially presented together with a visual stimulus at a certain spatial offset.
Subjects typically localize the auditory stimulus at the position of the visual stimulus, and this mis-localization persists even after the visual stimulus disappears.