
Fujita presents a supervised ANN model that learns either to generate a continuous time series from an input signal, or to generate a continuous function of the continuous integral of a time series.

RNNPB learns sequences of inputs in an unsupervised, self-organizing manner.

Keogh and Lin claim that what they call time-series subsequence clustering is meaningless: clustering all subsequences of a given length extracted from a time series yields the same (kind of) clusters as clustering random sequences.

Intuitively, clustering time series subsequences is meaningless for two reasons:

- The sum (or average) of all subsequences of a time series is always a straight line, and the cluster centers (in $k$-means and related algorithms) sum to the global mean. An algorithm can therefore only recover the *interesting* features of a time series if those features happen to average out to a straight line, which is rarely the case.
- Any meaningful clustering should assign subsequences that start at nearby time points to the same cluster (what Keogh and Lin call *trivial matches*). However, the similarity between nearby subsequences depends strongly on the local rate of change around them, so a clustering algorithm will place cluster centers close to subsequences with a low rate of change, i.e. typically close to the less interesting ones.
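The first point can be checked numerically: each coordinate of the mean subsequence averages almost the entire series, so the mean flattens toward the global mean. A minimal sketch in NumPy (the random walk and the window length of 32 are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
ts = np.cumsum(rng.standard_normal(1000))  # any series works; a random walk is illustrative
w = 32                                     # sliding-window (subsequence) length

# One subsequence per row: ts[0:w], ts[1:w+1], ...
subseqs = np.lib.stride_tricks.sliding_window_view(ts, w)

# Each coordinate of the mean subsequence averages nearly the whole series,
# so the mean is almost constant -- a straight line at the global mean.
mean_subseq = subseqs.mean(axis=0)
print("range of the series:         ", np.ptp(ts))
print("range of the mean subsequence:", np.ptp(mean_subseq))
```

The range of the mean subsequence is bounded by roughly $w/n$ times the range of the series, so for long series it is nearly flat regardless of how interesting the series itself is.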

Keogh and Lin propose focusing on finding motifs in time series instead of clusters.
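A motif is a pair (or set) of subsequences that are unusually similar to each other. As an illustration only, not Keogh and Lin's actual algorithm, a brute-force motif search that skips trivial (overlapping) matches can be sketched as:

```python
import numpy as np

def naive_motif_pair(ts, w):
    """Return (distance, i, j) for the closest pair of length-w subsequences,
    excluding trivial matches by requiring the two windows not to overlap.
    Brute force with O(n^2) distance computations -- a sketch, not an
    efficient motif-discovery algorithm."""
    subseqs = np.lib.stride_tricks.sliding_window_view(ts, w)
    n = len(subseqs)
    best = (np.inf, -1, -1)
    for i in range(n):
        for j in range(i + w, n):  # j - i >= w  =>  windows do not overlap
            d = np.linalg.norm(subseqs[i] - subseqs[j])
            if d < best[0]:
                best = (d, i, j)
    return best

# Plant the same (lightly perturbed) pattern twice in a noise series.
rng = np.random.default_rng(1)
ts = rng.standard_normal(300)
pattern = np.sin(np.linspace(0, 2 * np.pi, 25))
ts[40:65] = pattern + 0.05 * rng.standard_normal(25)
ts[200:225] = pattern + 0.05 * rng.standard_normal(25)

d, i, j = naive_motif_pair(ts, 25)
print(d, i, j)  # the best pair should land near positions 40 and 200
```

Unlike subsequence clustering, the motif pair is pulled toward the planted pattern rather than toward smooth, low-rate-of-change stretches of the series.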