Tag: dimensionality-reduction


Self-organizing maps (SOMs) can be used as a means of learning principal manifolds.

There is a hypothesis that complex motions are composed of combinations of simple muscle synergies, which would reduce the dimensionality of the control signal.

A low-dimensional representation of motion patterns in a high-dimensional space restricts the actual dimensionality of those motions.

I'm not so sure that a low-dimensional representation of motion patterns in a high-dimensional space necessarily restricts the actual dimensionality of those motions:

$\mathbb{Q}^3$ is in bijection with $\mathbb{Q}$ (both sets are countably infinite, so such a bijection exists).
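To spell the point out: a Cantor-style pairing function gives a bijection $\mathbb{N}^2 \to \mathbb{N}$,

$$\pi(m, n) = \frac{(m + n)(m + n + 1)}{2} + n,$$

and this extends to a bijection $\mathbb{Q}^3 \leftrightarrow \mathbb{Q}$. The catch is that such bijections are nowhere continuous; topological dimension is only invariant under homeomorphisms. So a bijective re-encoding alone says nothing about dimensionality; a low-dimensional representation restricts the actual dimensionality only if the mapping is continuous in both directions.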

It probably does hold for natural behavior, though, where the relevant mappings are continuous.

The concept of reduction of the dimensionality of motor space by using motor synergies has been used in robotics.

Zhang et al. propose an unsupervised dimensionality reduction algorithm, which they call 'multi-modal'.

Their notion of multi-modality differs from the one used in my work: it means that a latent, low-dimensional variable is distributed according to a multi-modal PDF.

Recovering such a multi-modal latent variable can be difficult, depending on the transformation function mapping the high-dimensional data into the low-dimensional space. Linear methods like PCA will especially suffer from this.
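A minimal sketch of that failure mode (toy data and parameters of my own invention, not from Zhang et al.): two modes that are well separated along a low-variance axis collapse onto one another under one-component PCA, because PCA retains only the high-variance axis.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Two modes, well separated along y but with most variance along x.
n = 500
cluster = lambda cy: np.column_stack([rng.normal(0.0, 3.0, n),
                                      rng.normal(cy, 0.2, n)])
X = np.vstack([cluster(+1.0), cluster(-1.0)])
labels = np.repeat([0, 1], n)

# One-component PCA keeps the high-variance x-axis and discards
# the mode-separating y-axis.
z = PCA(n_components=1).fit_transform(X).ravel()

print("mode separation in 2-D (y):     ",
      abs(X[labels == 0, 1].mean() - X[labels == 1, 1].mean()))  # ~2.0
print("mode separation after PCA (1-D):",
      abs(z[labels == 0].mean() - z[labels == 1].mean()))        # ~0.0
```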

The authors focus on (mostly binary) classification. In that context, multi-modality requires complex decision boundaries.

The number of reservoir nodes in reservoir computing is typically much larger than the number of input or output neurons.

A reservoir network therefore first translates the low-dimensional input into a high-dimensional space and back into a low-dimensional space.

The transfer functions of reservoir nodes in reservoir computing are usually non-linear. The mapping from the low-dimensional to the high-dimensional space is therefore non-linear, and representations that are not linearly separable in the input layer can become linearly separable in the reservoir layer. Training the linear, non-recurrent output layer is therefore enough, even for problems that could not be solved with a single-layer perceptron on its own.
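A minimal sketch of that separability argument, with a fixed random non-linear expansion standing in for the reservoir (no recurrence, so this is the static reading; a real reservoir's recurrent dynamics matter for temporal tasks): XOR is not linearly separable in its 2-D input space, but a linear readout trained on the expanded representation solves it.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: not linearly separable in the 2-D input space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Fixed random projection into a 100-dimensional 'reservoir' space
# with a non-linear (tanh) transfer function.
n_res = 100
W_in = rng.normal(size=(2, n_res))
b = rng.normal(size=n_res)
H = np.tanh(X @ W_in + b)      # high-dimensional, non-linear representation

# Train only the linear readout, in closed form via ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_res), H.T @ y)

print(np.round(H @ W_out, 2))  # ~[0. 1. 1. 0.]
```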

A principal manifold can only be learned correctly using a SOM if

  • the SOM's dimensionality is the same as that of the principal manifold,
  • the noise does not 'smear' the manifold so much that it becomes indistinguishable from a manifold of higher dimensionality, and
  • there are enough data points to infer the manifold behind the noise.
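As a toy illustration of the first condition, here is a from-scratch sketch (invented data and parameters) of a 1-D SOM recovering a 1-D principal manifold, a sine arc, from noisy samples embedded in 2-D:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy samples of a 1-D principal manifold (a sine arc) embedded in 2-D.
t = rng.uniform(0, 2 * np.pi, 500)
data = np.column_stack([t, np.sin(t)]) + rng.normal(scale=0.1, size=(500, 2))

# A 1-D SOM: a chain of units, matching the manifold's dimensionality.
n_units = 25
units = np.column_stack([np.linspace(0, 2 * np.pi, n_units),
                         np.zeros(n_units)])

for epoch in range(50):
    sigma = 3.0 * 0.95 ** epoch    # shrinking neighbourhood radius
    lr = 0.5 * 0.95 ** epoch       # decaying learning rate
    for x in rng.permutation(data):
        bmu = np.argmin(((units - x) ** 2).sum(axis=1))  # best-matching unit
        # Gaussian neighbourhood on the 1-D unit lattice, centred on the BMU.
        h = np.exp(-(np.arange(n_units) - bmu) ** 2 / (2 * sigma ** 2))
        units += lr * h[:, None] * (x - units)

# The chain of units now traces the sine arc through the noise.
print(units[::6].round(2))
```

A 2-D SOM on the same data (violating the first condition) would tend to fold into the noisy band around the arc rather than trace the curve, which also illustrates why too much noise makes the manifold's dimensionality ambiguous.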