"Towards Computational Modelling of Neural Multimodal Integration Based on the Superior Colliculus Concept"

Kiran Ravulakollu, Michael Knowles, Jindong Liu, and Stefan Wermter. In Innovations in Neural Information Paradigms and Applications (Monica Bianchini, Marco Maggini, Franco Scarselli, and Lakhmi Jain, eds.), Studies in Computational Intelligence, Vol. 247, Springer, 2009, pp. 269-291. doi:10.1007/978-3-642-04003-0_11
@incollection{ravulakollu-et-al-2009,
    abstract = {Information processing and responding to sensory input with appropriate actions are among the most important capabilities of the brain and the brain has specific areas that deal with auditory or visual processing. The auditory information is sent first to the cochlea, then to the inferior colliculus area and then later to the auditory cortex where it is further processed so that then eyes, head or both can be turned towards an object or location in response. The visual information is processed in the retina, various subsequent nuclei and then the visual cortex before again actions will be performed. However, how is this information integrated and what is the effect of auditory and visual stimuli arriving at the same time or at different times? Which information is processed when and what are the responses for multimodal stimuli? Multimodal integration is first performed in the Superior Colliculus, located in a subcortical part of the midbrain. In this chapter we will focus on this first level of multimodal integration, outline various approaches of modelling the superior colliculus, and suggest a model of multimodal integration of visual and auditory information.},
    address = {Berlin, Heidelberg},
    author = {Ravulakollu, Kiran and Knowles, Michael and Liu, Jindong and Wermter, Stefan},
    booktitle = {Innovations in Neural Information Paradigms and Applications},
    chapter = {11},
    doi = {10.1007/978-3-642-04003-0_11},
    editor = {Bianchini, Monica and Maggini, Marco and Scarselli, Franco and Jain, Lakhmi},
    isbn = {978-3-642-04002-3},
    keywords = {alignment, ann, architecture, auditory, cue-combination, enhancement, localization, model, multi-modality, sc, suppression, visual},
    pages = {269--291},
    publisher = {Springer Berlin / Heidelberg},
    series = {Studies in Computational Intelligence},
    title = {Towards Computational Modelling of Neural Multimodal Integration Based on the Superior Colliculus Concept},
    url = {http://dx.doi.org/10.1007/978-3-642-04003-0_11},
    volume = {247},
    year = {2009}
}


On the behavioral side, cross-modal enhancement and depression describe, respectively, the increase and decrease in the response to a stimulus in one modality caused by the presence of a stimulus in another modality.
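A minimal sketch of this behavior (not taken from the chapter; the function, parameter values, and combination rule are illustrative assumptions): a visual stimulus's effective salience is boosted when an auditory stimulus is spatially and temporally coincident with it, and reduced when the auditory stimulus conflicts.

```python
def crossmodal_salience(visual, auditory, coincident,
                        enhance=1.5, depress=0.5):
    """Combine unimodal salience values in [0, 1] into a multimodal response.

    Toy model: coincident auditory input enhances the visual response,
    conflicting auditory input depresses it. Gains are arbitrary choices.
    """
    if auditory == 0.0:
        # No auditory input: purely unimodal response.
        return visual
    if coincident:
        # Coincident stimuli: enhanced response, clipped to [0, 1].
        return min(1.0, enhance * (visual + auditory) / 2)
    # Conflicting stimuli: cross-modal depression.
    return depress * visual

print(crossmodal_salience(0.4, 0.0, False))  # unimodal baseline: 0.4
print(crossmodal_salience(0.4, 0.5, True))   # enhanced: 0.675
print(crossmodal_salience(0.4, 0.5, False))  # depressed: 0.2
```

The enhanced response exceeds either unimodal input alone, while the depressed response falls below the visual baseline, mirroring the enhancement/depression distinction described above.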