
The Hartigans' Dip Statistic measures unimodality in a sample: specifically, it is the greatest difference between the empirical cumulative distribution function and the unimodal cumulative distribution function that minimizes this greatest difference.⇒
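Computing the exact dip requires minimizing over all unimodal CDFs, which is involved. The sketch below only illustrates the quantity being minimized: the sup distance between the ECDF and one *candidate* unimodal CDF (a fitted normal), which is an upper bound on the dip. All function names are illustrative, not from the text.

```python
import math
import numpy as np

def ecdf(sample):
    """Empirical CDF evaluated at the sorted sample points."""
    x = np.sort(sample)
    return x, np.arange(1, len(x) + 1) / len(x)

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def dip_upper_bound(sample):
    """Sup distance between the ECDF and one candidate unimodal CDF
    (a normal fitted by moments).  The true dip minimizes this distance
    over *all* unimodal CDFs, so this is only an upper bound."""
    x, f = ecdf(sample)
    g = np.array([normal_cdf(v, x.mean(), x.std()) for v in x])
    n = len(x)
    # check the ECDF both at and just before each jump
    return max(np.max(np.abs(f - g)), np.max(np.abs(f - 1.0 / n - g)))
```

A clearly bimodal sample yields a much larger bound than a unimodal one, which is the intuition behind the dip test.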

In SOM learning, the shrinking of the neighborhood size and the decay of the update strength usually follow predefined schedules, i.e. they depend only on the update step.⇒
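Such a predefined schedule might look as follows: both quantities decay as a function of the step `t` alone, never of the data. The exponential form and all constants are illustrative assumptions, not from the text.

```python
def som_schedules(t, t_max, sigma0=5.0, sigma_end=0.5,
                  alpha0=0.5, alpha_end=0.01):
    """Predefined SOM schedules: neighborhood radius (sigma) and update
    strength (alpha) depend only on the step t, not on the data.
    Exponential decay from the start value to the end value."""
    frac = t / t_max
    sigma = sigma0 * (sigma_end / sigma0) ** frac
    alpha = alpha0 * (alpha_end / alpha0) ** frac
    return sigma, alpha
```

Both values shrink monotonically from their start to their end values as `t` runs from 0 to `t_max`.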

In the PLSOM algorithm, the update strength depends on the distance between a data point and the best-matching unit's weight vector, i.e. on the quantization error. A large distance, indicating that the data point is poorly represented in the SOM, leads to a stronger update than a small distance. The distance is scaled relative to the largest quantization error encountered so far.⇒
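A minimal sketch of that scaling, assuming vectors are numpy arrays and the running maximum is kept in a dict; names are illustrative:

```python
import numpy as np

def plsom_epsilon(x, w_bmu, state):
    """PLSOM update strength: the current quantization error divided by
    the largest quantization error seen so far (kept in state['r'])."""
    err = float(np.linalg.norm(x - w_bmu))      # quantization error
    state['r'] = max(state.get('r', 0.0), err)  # running maximum
    return err / state['r'] if state['r'] > 0 else 0.0
```

The largest error seen so far always maps to strength 1; smaller errors give proportionally weaker updates.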

PLSOM reduces the number of parameters of the SOM algorithm from four to two.⇒

PLSOM overreacts to outliers: data points that are very unrepresentative of the data in general change the network more strongly than they should.⇒

PLSOM2 addresses the problem of PLSOM overreacting to outliers.⇒

Viola and Jones presented a fast and robust object detection system based on

- a computationally fast way to extract features from images,
- the AdaBoost machine learning algorithm,
- a cascade of classifiers of increasing complexity.⇒
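The fast feature extraction rests on the integral image (summed-area table): once it is built, the sum over any rectangle costs four lookups, regardless of the rectangle's size. A minimal sketch; function names are illustrative:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a leading zero row/column:
    ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] using four lookups, independent of the
    rectangle size -- the basis of fast Haar-like feature extraction."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
```

A Haar-like feature is then just a difference of a few such rectangle sums.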

If we know what output we want and each neuron's output is a smooth function of its input, then the change in weights needed to get the right output from the input can be computed using calculus.

Following this strategy, we get backpropagation.⇒
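The idea above can be sketched for a tiny two-layer network with smooth (sigmoid) units and squared error; the chain rule turns the output error into weight changes. Shapes, names, and the learning rate are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, y, W1, W2, lr=0.1):
    """One gradient step for a 2-layer net.
    x: input (d,), y: target (k,), W1: (h, d), W2: (k, h)."""
    # forward pass
    h = sigmoid(W1 @ x)
    o = sigmoid(W2 @ h)
    # backward pass: chain rule gives dE/dW from the output error
    delta_o = (o - y) * o * (1 - o)           # error at the output units
    delta_h = (W2.T @ delta_o) * h * (1 - h)  # propagated back to hidden
    W2 -= lr * np.outer(delta_o, h)
    W1 -= lr * np.outer(delta_h, x)
    return 0.5 * np.sum((o - y) ** 2)         # squared error before step
```

Repeating the step on the same example drives the error down, which is all "computed using calculus" amounts to here.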

One problem with backpropagation is that one usually starts with small weights, which will be far away from the optimal weights. Due to the size of the combinatorial space of weights, learning can therefore take a long time.⇒

In the wake-sleep algorithm, (at least) two layers of neurons are fully connected to each other.

In the wake phase, the lower layer drives the upper layer through the bottom-up recognition weights. The top-down generative weights are trained such that they would generate the current activity in the lower layer given the current activity in the upper layer.

In the sleep phase, the upper layer drives activity in the lower layer through the generative weights, and the recognition weights are trained such that they induce the activity in the upper layer given the activity in the lower layer.⇒
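The two phases can be sketched with stochastic binary units and simple delta-rule updates. This is a minimal sketch under assumed shapes (R: recognition weights, G: generative weights); biases are omitted and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample_bernoulli(p):
    """Stochastic binary units: fire with probability p."""
    return (rng.random(p.shape) < p).astype(float)

def wake_phase(v, R, G, lr=0.05):
    """Wake: data v drives the upper layer through recognition weights R;
    generative weights G are trained to reproduce v from h."""
    h = sample_bernoulli(sigmoid(R @ v))   # bottom-up recognition
    v_gen = sigmoid(G @ h)                 # top-down reconstruction
    G += lr * np.outer(v - v_gen, h)       # make G generate v from h
    return h

def sleep_phase(h, R, G, lr=0.05):
    """Sleep: a fantasy h drives the lower layer through G; recognition
    weights R are trained to recover h from that fantasy."""
    v = sample_bernoulli(sigmoid(G @ h))   # top-down generation
    h_rec = sigmoid(R @ v)                 # bottom-up recognition
    R += lr * np.outer(h - h_rec, v)       # make R recover h from v
    return v
```

Each phase trains the weights pointing in the direction *opposite* to the one that drove the activity, which is the characteristic trick of wake-sleep.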

Learning in RBMs is competitive but without explicit inhibition (because the RBM is *restricted* in that it has no connections within a layer).
Neurons learn different things due to random initialization and stochastic processing.⇒

The SOM is an asymptotically optimal vector quantizer.⇒

There is no cost function that the SOM algorithm follows exactly.⇒

Quality of order in SOMs is a difficult issue because there is no unique definition of `order' for the $n$-dimensional case if $n>2$.

Nevertheless, there have been a number of attempts.⇒

There have been many extensions of the original SOM ANN, like

- (Growing) Neural Gas
- adaptive subspace SOM (ASSOM)
- Parameterized SOM (PSOM)
- Stochastic SOM
- recursive and recurrent SOMs⇒

Recursive and Recurrent SOMs have been used for mapping temporal data.⇒