
Johannes Bauer, Cornelius Weber, Stefan Wermter: "A SOM-based Model for Multi-sensory Integration in the Superior Colliculus". In The 2012 International Joint Conference on Neural Networks (IJCNN), June 2012, pp. 1-8, doi:10.1109/ijcnn.2012.6252816.
@inproceedings{bauer-et-al-2012,
    address = {Brisbane, Australia},
    abstract = {We present an algorithm based on the self-organizing map ({SOM}) which models multi-sensory integration as realized by the superior colliculus ({SC}). Our algorithm differs from other algorithms for multi-sensory integration in that it learns mappings between modalities' coordinate systems, it learns their respective reliabilities for different points in space, and uses mappings and reliabilities to perform cue integration. It does this in only one learning phase without supervision and such that calculations and data structures are local to individual neurons. Our simulations indicate that our algorithm can learn near-optimal integration of input from noisy sensory modalities.},
    author = {Bauer, Johannes and Weber, Cornelius and Wermter, Stefan},
    booktitle = {The 2012 International Joint Conference on Neural Networks (IJCNN)},
    doi = {10.1109/ijcnn.2012.6252816},
    isbn = {978-1-4673-1490-9},
    keywords = {ann, calibration, cue-combination, development, learning, localization, model, multi-modality, som, unsupervised-learning},
    month = jun,
    pages = {1--8},
    publisher = {IEEE},
    title = {A {SOM}-based Model for Multi-sensory Integration in the Superior Colliculus},
    url = {http://www.informatik.uni-hamburg.de/WTM/ps/Bauer_IJCNN2012_CR.pdf},
    year = {2012}
}


My algorithms minimize the expected error since they take into account the probability of data points (via noise properties).

Is it possible for a system to learn the reliability of its sensory modalities from how well they agree with the consensus among the modalities, under certain conditions?

Possible conditions:

  • many modalities (what my 2013 model does)
  • similar reliability
  • enough noise
  • enough remaining entropy at the end of learning (worked in early versions of my SOM)

My SOM implicitly takes care of differences in scaling between input dimensions and also weights the input dimensions, while Kangas et al.'s SOM learns only the scaling.

So, I've returned to the roots and found something interesting for applications!

My SOMs learn competitively. However, they do not encode error; they encode latent variables.

The SOM can be modified to take into account the variance of the input dimensions with respect to each other.

Zhou et al. use an approach similar to that of Bauer et al. They do not use pairwise cross-correlation between input modalities, but simply the variances of individual modalities. It is unclear how they handle the case in which one modality effectively becomes ground truth for the algorithm.

Without feedback about ground truth, a system learning from a data set of noisy corresponding values must have at least three modalities to learn their reliabilities. One way of doing this is learning pairwise correlation between modalities. It is not enough to take the best hypothesis on the basis of the currently learned reliability model and use that instead of ground truth to learn the variance of the individual modalities: If the algorithm comes to believe that one modality has near-perfect reliability, then that will determine the next best hypotheses. In effect, that modality will be ground truth for the algorithm and it will only learn how well the others predict it.
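The three-modality argument can be made concrete with variances of pairwise differences. For independent zero-mean noise, Var(x_i - x_j) = s_i^2 + s_j^2; with three modalities this gives three equations in three unknowns, so the individual reliabilities can be recovered without ground truth. A small numerical sketch (the noise levels below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
truth = rng.uniform(0, 10, n)
sigmas = np.array([0.5, 1.0, 2.0])  # true noise levels, hidden from the estimator
obs = truth + sigmas[:, None] * rng.standard_normal((3, n))

# Pairwise variances of differences: Var(x_i - x_j) = s_i^2 + s_j^2.
d01 = np.var(obs[0] - obs[1])
d02 = np.var(obs[0] - obs[2])
d12 = np.var(obs[1] - obs[2])

# Three equations, three unknowns -- solvable only because there are
# at least three modalities; with two, s_0^2 + s_1^2 is all we can know.
s0 = (d01 + d02 - d12) / 2
s1 = (d01 + d12 - d02) / 2
s2 = (d02 + d12 - d01) / 2
print(np.sqrt([s0, s1, s2]))  # recovers approximately [0.5, 1.0, 2.0]
```

With only two modalities the single equation is underdetermined, which is the formal counterpart of the "one modality becomes ground truth" failure mode described above.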

Zhou et al.'s and Bauer et al.'s statistical SOM variants assume Gaussian noise in the input.

Judging by the abstract, von der Malsburg's Democratic Integration does what I believe is impossible: it lets a system learn the reliability of its sensory modalities from how well they agree with the consensus between the modalities.

Bauer et al. present a SOM variant which learns the variance of different sensory modalities (assuming Gaussian noise) to model multi-sensory integration in the SC.
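Under the Gaussian-noise assumption, the integration such a model targets is the standard maximum-likelihood fusion of independent cues: each estimate is weighted by its inverse variance (its reliability). A minimal sketch, with hypothetical cue values:

```python
import numpy as np

def integrate(estimates, variances):
    """ML integration of independent Gaussian cues:
    weight each estimate by its inverse variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    fused_var = 1.0 / np.sum(w)  # always <= the smallest input variance
    return fused, fused_var

# Illustrative values: a reliable visual cue at 10.0 (variance 1.0)
# and a noisy auditory cue at 14.0 (variance 4.0).
est, var = integrate([10.0, 14.0], [1.0, 4.0])
print(est, var)  # 10.8 0.8
```

The fused variance (0.8) is smaller than either input variance, which is the hallmark of near-optimal multi-sensory integration the paper's simulations test for.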