Self-Organized Neural Learning of Statistical Inference from High-Dimensional Data. In International Joint Conference on Artificial Intelligence (IJCAI), August 2013, pp. 1226–1232, by Johannes Bauer and Stefan Wermter, edited by F. Rossi.

@inproceedings{bauer-and-wermter-2013,
  address   = {Beijing, China},
  author    = {Bauer, Johannes and Wermter, Stefan},
  booktitle = {International Joint Conference on Artificial Intelligence (IJCAI)},
  editor    = {Rossi, F.},
  keywords  = {algorithmic, ann, bayes, learning, multisensory-integration, som},
  location  = {Beijing, China},
  month     = aug,
  pages     = {1226--1232},
  title     = {{Self-Organized} Neural Learning of Statistical Inference from {High-Dimensional} Data},
  url       = {http://ijcai.org/papers13/Papers/IJCAI13-185.pdf},
  year      = {2013}
}


Bauer and Wermter show how probabilistic population codes and near-optimal integration can develop.

My algorithms minimize the **expected** error because they take the probability of data points into account (via the properties of the noise).

My model is normative, performs optimally, **and** shows super-additivity (to be shown).

Is it possible for a model to learn the reliability of its sensory modalities from how well each modality agrees with the consensus among the modalities, given certain conditions?

Possible conditions:

- many modalities (what my 2013 model does)
- similar reliability
- enough noise
- enough remaining *entropy* at the end of learning (worked in early versions of my SOM)
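A toy sketch of the consensus idea (my own illustration, not the 2013 algorithm): with many modalities of broadly similar reliability observing the same latent value, each modality's noise level can be estimated from its disagreement with a leave-one-out, precision-weighted consensus of the other modalities. The estimate is biased upward by the consensus's own error, and that bias shrinks as the number of modalities grows, which is one way to read the "many modalities" condition above. The names and the iterative scheme here are my assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
true_sigma = np.linspace(0.2, 0.55, 8)       # per-modality noise levels (assumed setup)
n_trials = 4000
latent = rng.uniform(0, 1, n_trials)
obs = latent[:, None] + rng.standard_normal((n_trials, true_sigma.size)) * true_sigma

precision = np.ones(true_sigma.size)         # start from uniform reliability estimates
for _ in range(5):
    est_var = np.empty_like(precision)
    for i in range(true_sigma.size):
        others = np.arange(true_sigma.size) != i
        w = precision[others] / precision[others].sum()
        consensus = obs[:, others] @ w       # consensus excluding modality i
        # disagreement with the consensus serves as the reliability estimate
        est_var[i] = ((obs[:, i] - consensus) ** 2).mean()
    precision = 1.0 / est_var

est_sigma = np.sqrt(est_var)
print(est_sigma.round(2))                    # roughly recovers true_sigma, biased upward
```

The precision weighting matters: feeding the current reliability estimates back into the consensus sharpens it, while the leave-one-out construction keeps a reliable modality from dominating its own reference signal.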

SOMs learn latent-variable models.
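As a reminder of the mechanism, here is a minimal SOM sketch in NumPy (my own toy setup, not the paper's network): a 1-D chain of units trained on noisy, population-coded inputs self-organizes so that neighboring units come to represent neighboring values of the underlying latent variable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed, not from the paper): a 1-D latent variable s in [0, 1]
# drives a 20-dimensional noisy input x via Gaussian tuning curves.
n_units, n_dims, n_steps = 15, 20, 5000
centers = np.linspace(0, 1, n_dims)              # tuning-curve centers in input space
weights = rng.uniform(0, 1, (n_units, n_dims))

def encode(s):
    """Population-coded input: Gaussian tuning curves plus additive noise."""
    return np.exp(-(centers - s) ** 2 / 0.02) + 0.1 * rng.standard_normal(n_dims)

for t in range(n_steps):
    x = encode(rng.uniform())
    lr = 0.5 * (1 - t / n_steps)                 # decaying learning rate
    sigma = 1 + 4 * (1 - t / n_steps)            # shrinking neighborhood width
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
    h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)   # standard SOM update

# Each unit's weight vector now peaks near one latent value; the peaks
# should spread across the latent space and vary smoothly along the chain.
prefs = centers[np.argmax(weights, axis=1)]
print(prefs)
```

Nothing in the update rule mentions the latent variable `s`; the chain recovers it purely from the structure of the inputs, which is the sense in which a SOM learns a latent-variable model.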

If the noise in the inputs to my SOM is correlated between input neurons, then the SOM cannot properly learn a latent-variable model.

There can be situations where my algorithm is still optimal or near-optimal.

A SOM in which each unit computes the probability of some value of a sensory variable produces a probabilistic population code, i.e., it computes a population-coded probability density function.
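A quick sketch of what such a code buys you (a toy construction of mine, not from the paper): normalizing the units' responses gives a discrete density over their preferred values, from which both a point estimate and its uncertainty can be read out.

```python
import numpy as np

# Toy population code: units with Gaussian tuning curves respond to a noisy
# stimulus; normalizing the responses yields a population-coded probability
# density function over the units' preferred values. All parameters are assumed.
rng = np.random.default_rng(1)
preferred = np.linspace(0.0, 1.0, 21)          # one preferred value per unit
stimulus = 0.3

responses = np.exp(-(preferred - stimulus) ** 2 / 0.02)
responses += 0.05 * rng.standard_normal(preferred.size)
responses = np.clip(responses, 0.0, None)      # firing rates cannot be negative

pdf = responses / responses.sum()              # discrete pdf over preferred values
mean = pdf @ preferred                         # read out an estimate ...
var = pdf @ (preferred - mean) ** 2            # ... and its uncertainty
print(round(mean, 3), round(var, 5))
```

Carrying the whole `pdf` forward, rather than just `mean`, is what makes downstream statistically optimal integration possible.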

My SOMs learn competitively, but they do not encode error; they encode latent variables.

Bauer and Wermter present an ANN algorithm that takes from the self-organizing map (SOM) algorithm the ability to learn a latent-variable model from its input. They extend the SOM algorithm so that it learns about the distribution of noise in the input and computes probability density functions over the latent variables. The algorithm represents these probability density functions using population codes, and it does so with very few assumptions about the distribution of the noise.
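One way such an extension might look, sketched very loosely (this is my assumption of the general shape, not the paper's update rules): alongside each unit's weight vector, keep an estimate of the per-dimension input noise variance, and use it to turn unit matches into a population-coded density over the units' latent values.

```python
import numpy as np

rng = np.random.default_rng(4)
n_units, n_dims = 10, 12
weights = rng.uniform(0, 1, (n_units, n_dims))   # stand-in for a trained SOM (assumed)
noise_var = np.full(n_dims, 0.05)                # per-dimension noise estimate (assumed)

def posterior_over_units(x):
    """Gaussian likelihood of x under each unit, normalized into a pdf."""
    log_lik = -0.5 * np.sum((x - weights) ** 2 / noise_var, axis=1)
    log_lik -= log_lik.max()                     # subtract max for numerical stability
    p = np.exp(log_lik)
    return p / p.sum()                           # population-coded pdf over units

# A noisy observation generated from unit 3's weight vector:
x = weights[3] + np.sqrt(noise_var) * rng.standard_normal(n_dims)
pdf = posterior_over_units(x)
print(pdf.round(3))
```

The point of learning `noise_var` rather than fixing it is that the sharpness of the resulting pdf then reflects the actual reliability of the input, which is what makes the output usable for statistical inference downstream.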