Reference: "Variants of self-organizing maps"

Jari A. Kangas, Teuvo K. Kohonen, and Jorma T. Laaksonen: "Variants of self-organizing maps". IEEE Transactions on Neural Networks, Vol. 1, No. 1 (March 1990), pp. 93-99, doi:10.1109/72.80208
@article{kangas-et-al-1990,
    abstract = {Self-organizing maps have a bearing on traditional vector quantization. A characteristic that makes them more closely resemble certain biological brain maps, however, is the spatial order of their responses, which is formed in the learning process. A discussion is presented of the basic algorithms and two innovations: dynamic weighting of the input signals at each input of each cell, which improves the ordering when very different input signals are used, and definition of neighborhoods in the learning algorithm by the minimal spanning tree, which provides a far better and faster approximation of prominently structured density functions. It is cautioned that if the maps are used for pattern recognition and decision process, it is necessary to fine tune the reference vectors so that they directly define the decision borders},
    author = {Kangas, Jari A. and Kohonen, Teuvo K. and Laaksonen, Jorma T.},
    day = {06},
    doi = {10.1109/72.80208},
    issn = {1045-9227},
    journal = {IEEE Transactions on Neural Networks},
    keywords = {learning, som, unsupervised-learning},
    month = mar,
    number = {1},
    pages = {93--99},
    title = {Variants of self-organizing maps},
    url = {http://dx.doi.org/10.1109/72.80208},
    volume = {1},
    year = {1990}
}

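The second variant described in the abstract, neighborhoods defined by the minimal spanning tree, can be read roughly as follows: build a minimal spanning tree over the current reference vectors and take a unit's neighborhood to be the units within a few tree hops of the best-matching unit. The sketch below is only that reading; the function names and the hop-count parameter are my own choices, not the authors' code.

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path
from scipy.spatial.distance import cdist

def mst_neighborhood(codebook, bmu_index, max_hops=1):
    # Minimal spanning tree over the pairwise distances between reference vectors.
    dists = cdist(codebook, codebook)
    mst = minimum_spanning_tree(dists)
    # Hop counts along the tree: every tree edge counts as length 1.
    hops = shortest_path(mst + mst.T, unweighted=True)
    # Neighborhood = all units within max_hops tree edges of the winner.
    return np.flatnonzero(hops[bmu_index] <= max_hops)

# Toy usage: 10 reference vectors in 3 dimensions.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(10, 3))
print(mst_neighborhood(codebook, bmu_index=4, max_hops=2))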

Kohonen names normalization of the input dimensions as a remedy for differences in scaling between them. He does not cite another of his papers (with colleagues) in which he presents a SOM that learns this scaling.
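Such normalization typically amounts to a per-dimension rescaling before training; a minimal sketch, with variance scaling as one common choice (not necessarily the one Kohonen refers to):

import numpy as np

def normalize_dimensions(data):
    # Rescale every input dimension to zero mean and unit variance so that
    # no dimension dominates the Euclidean distance used by the SOM.
    mean = data.mean(axis=0)
    std = data.std(axis=0)
    std[std == 0] = 1.0  # leave constant dimensions unchanged
    return (data - mean) / std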

My SOM handles differences in scaling between input dimensions implicitly and additionally weights the input dimensions, whereas Kangas et al.'s SOM only learns the scaling.
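To make the comparison concrete, the following is a minimal sketch of the shared idea: per-cell, per-dimension weights in the matching step that are adapted during training. The adaptation rule here is purely illustrative (it rebalances the winner's weights so that all dimensions contribute comparably to its error) and is neither the exact rule from Kangas et al. nor the mechanism of my SOM.

import numpy as np

def weighted_bmu(x, codebook, weights):
    # Best-matching unit under a per-cell, per-dimension weighted distance.
    diffs = codebook - x
    dists = np.sum(weights * diffs ** 2, axis=1)
    return int(np.argmin(dists))

def update_weights(weights, bmu, x, codebook, lr=0.01, eps=1e-8):
    # Illustrative rule: shrink the weight of dimensions where the winner's
    # error is large, so that no single dimension dominates the match.
    err = np.abs(codebook[bmu] - x) + eps
    weights[bmu] *= (err.mean() / err) ** lr
    weights[bmu] /= weights[bmu].mean()  # keep the weights on a bounded scale
    return weights

# Toy usage with very differently scaled input dimensions.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 3)) * np.array([1.0, 100.0, 0.01])
codebook = data[rng.choice(len(data), size=16, replace=False)].copy()
weights = np.ones_like(codebook)

for x in data:
    bmu = weighted_bmu(x, codebook, weights)
    codebook[bmu] += 0.1 * (x - codebook[bmu])  # plain winner update, no neighborhood
    weights = update_weights(weights, bmu, x, codebook)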