Show Reference: "A Neural Network Model of Ventriloquism Effect and Aftereffect"

A neural network model of ventriloquism effect and aftereffect. PLoS ONE, Vol. 7, No. 8 (3 August 2012), e42503, doi:10.1371/journal.pone.0042503, by Elisa Magosso, Cristiano Cuppini, Mauro Ursino
@article{magosso2012ventriloquism,
    abstract = {Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. General explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. Main results are: i) the less localized stimulus is strongly biased toward the most localized stimulus and not vice versa; ii) amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.},
    author = {Magosso, Elisa and Cuppini, Cristiano and Ursino, Mauro},
    day = {3},
    doi = {10.1371/journal.pone.0042503},
    issn = {1932-6203},
    journal = {PLoS ONE},
    keywords = {ann, audio, model, multisensory-integration, ventriloquism-effect, visual},
    month = aug,
    number = {8},
    pages = {e42503+},
    pmcid = {PMC3411784},
    pmid = {22880007},
    posted-at = {2013-08-27 03:27:18},
    priority = {2},
    publisher = {Public Library of Science},
    title = {A neural network model of ventriloquism effect and aftereffect},
    url = {},
    volume = {7},
    year = {2012}
}


Magosso et al. present a recurrent ANN model which replicates the ventriloquism effect and the ventriloquism aftereffect.
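The core mechanism behind the ventriloquism effect in the model, reciprocal excitation between a sharply tuned visual layer and a broadly tuned auditory layer, can be illustrated with a minimal sketch. The tuning widths, weights, and peak-based decoding below are illustrative assumptions, not the paper's fitted parameters or its full nonlinear dynamics:

```python
import numpy as np

# Spatial axis (degrees); both unimodal layers are assumed topographically aligned
positions = np.arange(180.0)

def gaussian(center, sigma):
    return np.exp(-((positions - center) ** 2) / (2.0 * sigma ** 2))

# External inputs: vision is sharply tuned, audition is broadly tuned
v_in = gaussian(100.0, 4.0)    # visual stimulus at 100 deg (high spatial resolution)
a_in = gaussian(90.0, 30.0)    # auditory stimulus at 90 deg (low spatial resolution)

# Reciprocal inter-layer excitation; vision -> audition is stronger (illustrative weights)
w_v2a = 0.5
w_a2v = 0.1

a, v = a_in.copy(), v_in.copy()
for _ in range(5):                # relax toward steady state (w_v2a * w_a2v < 1, so this converges)
    a = a_in + w_v2a * v
    v = v_in + w_a2v * a

# Decode each layer's percept as the position of peak activity
perceived_a = positions[np.argmax(a)]   # pulled to the visual location: ventriloquism
perceived_v = positions[np.argmax(v)]   # vision stays put
```

With these parameters the broadly tuned auditory percept is fully captured by the visual location, while the visual percept does not move, matching the paper's result that the less localized stimulus is biased toward the more localized one and not vice versa.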

The ventriloquism aftereffect arises after exposure to an auditory stimulus repeatedly paired with a visual stimulus at a fixed spatial offset.

During exposure, subjects typically localize the auditory stimulus at the position of the visual stimulus, and this mislocalization persists even when the auditory stimulus is later presented alone.
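In the model, the aftereffect is produced by Hebbian potentiation and depression of lateral synapses during cross-modal exposure. A toy version of this idea can be sketched as a single Hebbian outer-product update on auditory lateral weights; the learning rate, tuning widths, and one-shot update are illustrative assumptions, not the paper's actual potentiation/depression rule:

```python
import numpy as np

positions = np.arange(180.0)

def gaussian(center, sigma):
    return np.exp(-((positions - center) ** 2) / (2.0 * sigma ** 2))

a_in = gaussian(90.0, 30.0)        # auditory input centred at 90 deg
a_shifted = gaussian(100.0, 30.0)  # auditory activity during exposure, captured
                                   # by a visual stimulus at 100 deg

# Toy Hebbian potentiation of lateral auditory synapses during exposure:
# presynaptic input at 90 deg co-occurs with postsynaptic activity near 100 deg
eta = 0.02                          # illustrative learning rate
W = eta * np.outer(a_shifted, a_in)

# Post-adaptation test: the auditory stimulus at 90 deg is presented alone
a_after = a_in + W @ a_in

loc_before = positions[np.argmax(a_in)]    # 90 deg before adaptation
loc_after = positions[np.argmax(a_after)]  # shifted toward the former visual position
```

After the update, the auditory-only response peaks between the true source and the former visual location, i.e. the sound is still mislocalized toward where the visual stimulus used to be, which is the aftereffect the paper reproduces with its adaptive lateral synapses.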