# Show Reference: "Modeling multisensory enhancement with self-organizing maps"

Modeling multisensory enhancement with self-organizing maps. Frontiers in Computational Neuroscience, Vol. 3 (2009), doi:10.3389/neuro.10.008.2009, by Jacob G. Martin, M. Alex Meredith, Khurshid Ahmad
@article{martin-et-al-2009,
abstract = {Self-organization, a process by which the internal organization of a system changes without supervision, has been proposed as a possible basis for multisensory enhancement ({MSE}) in the superior colliculus (Anastasio and Patton, 2003). We simplify and extend these results by presenting a simulation using traditional self-organizing maps, intended to understand and simulate {MSE} as it may generally occur throughout the central nervous system. This simulation of {MSE}: (1) uses a standard unsupervised competitive learning algorithm, (2) learns from artificially generated activation levels corresponding to driven and spontaneous stimuli from separate and combined input channels, (3) uses a sigmoidal transfer function to generate quantifiable responses to separate inputs, (4) enhances the responses when those same inputs are combined, (5) obeys the inverse effectiveness principle of multisensory integration, and (6) can topographically congregate {MSE} in a manner similar to that seen in cortex. Thus, the model provides a useful method for evaluating and simulating the development of enhanced interactions between responses to different sensory modalities.},
author = {Martin, Jacob G. and Meredith, M. Alex and Ahmad, Khurshid},
doi = {10.3389/neuro.10.008.2009},
issn = {1662-5188},
journal = {Frontiers in Computational Neuroscience},
keywords = {ann, architecture, cue-combination, development, enhancement, learning, localization, sc, som},
pmcid = {PMC2713735},
pmid = {19636382},
posted-at = {2012-05-22 15:07:35},
priority = {2},
title = {Modeling multisensory enhancement with self-organizing maps},
url = {http://dx.doi.org/10.3389/neuro.10.008.2009},
volume = {3},
year = {2009}
}


Input in Martin et al.'s model of multisensory integration in the SC is an $m$-dimensional vector for each data point, where $m$ is the number of modalities. Data points are uni-modal, bi-modal, or tri-modal: each dimension of a data point stochastically codes the activation of one modality, with driven (active) modalities drawing high activation levels and inactive modalities drawing low, spontaneous ones. The SOM learns to map different modality combinations onto different regions of its two-dimensional grid.
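The setup can be sketched with a minimal SOM in NumPy. This is an illustrative reconstruction, not the authors' implementation: the grid size, activation levels (0.8 driven vs. 0.1 spontaneous), learning-rate and neighborhood schedules are assumptions, and the sigmoidal transfer function of the original model is omitted. It trains a standard competitive-learning SOM on stochastic $m$-dimensional modality vectors and then locates the best-matching unit for each modality combination.

```python
import numpy as np

rng = np.random.default_rng(0)

m = 3        # number of modalities (tri-modal case; assumed)
grid = 8     # side length of the 2-D SOM grid (assumed)
n_steps = 3000

# All non-empty modality subsets, encoded as bitmasks 1..2^m - 1
combos = list(range(1, 2 ** m))

def sample():
    """Draw one data point: driven modalities get high stochastic
    activation, inactive ones low spontaneous activation."""
    c = rng.choice(combos)
    active = np.array([(c >> i) & 1 for i in range(m)], dtype=float)
    driven = rng.normal(0.8, 0.1, m)       # assumed driven level
    spont = rng.normal(0.1, 0.05, m)       # assumed spontaneous level
    return np.clip(active * driven + (1 - active) * spont, 0.0, 1.0)

# SOM weight vectors: one m-dim prototype per grid unit
w = rng.random((grid, grid, m))
ys, xs = np.mgrid[0:grid, 0:grid]

for t in range(n_steps):
    x = sample()
    # Best-matching unit (competitive step)
    d = np.linalg.norm(w - x, axis=2)
    by, bx = np.unravel_index(np.argmin(d), d.shape)
    # Exponentially decaying learning rate and neighborhood radius
    lr = 0.5 * (0.02 / 0.5) ** (t / n_steps)
    sigma = 3.0 * (0.5 / 3.0) ** (t / n_steps)
    h = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
    # Cooperative update: pull the winner's neighborhood toward x
    w += lr * h[:, :, None] * (x - w)

# After training, find where each modality combination wins on the grid
winners = {}
for c in combos:
    active = np.array([(c >> i) & 1 for i in range(m)], dtype=float)
    proto = active * 0.8 + (1 - active) * 0.1   # noise-free prototype
    d = np.linalg.norm(w - proto, axis=2)
    winners[c] = tuple(np.unravel_index(np.argmin(d), d.shape))

print(winners)
```

If the map self-organizes as described, distinct modality combinations end up with best-matching units in distinct grid regions, which is the topographic segregation the paper's point (6) refers to.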