Tag: cognitive-model

According to Sun, a computational cognitive model is a theory of cognition that describes the mechanisms and processes of cognition computationally and is thus 'runnable'.

Sun argues that computational cognitive models are well suited to describing the mechanisms and representations studied in cognitive science.

Sun argues that a computational model for a verbal-conceptual theory in cognitive science is a theory in itself because it is more specific.

Strictly speaking, following Sun's argument, every parameterization of an algorithm realizing a computational model is a theory distinct from every other parameterization.

Sun argues that the failure of one computational model which is a more specific version of a verbal-conceptual theory does not invalidate the theory, especially if a different computational model specifying that theory produces phenomenology consistent with empirical data.

Computer programs can be theories of cognition: The theory represented by such a program would state that (certain) changes of state in a cognitive system are isomorphic to the changes in the computer determined by the program.
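To illustrate what "program as theory" means, here is a minimal sketch, assuming a made-up production-system model of rehearsal in short-term memory (all rules and parameters are hypothetical). Read as a theory, its claim would be that the program's state transitions mirror those of the modeled system.

```python
# Toy production-system model of rehearsal in short-term memory.
# Read as a theory, it claims that the state transitions of this program
# are isomorphic to (certain) state changes in the cognitive system.

def encode(state):
    """Fires when input is pending: move the next item into the buffer."""
    if state["pending"]:
        item = state["pending"].pop(0)
        state["buffer"] = item
        state["strength"].setdefault(item, 0.0)
        return True
    return False

def rehearse(state):
    """Fires when the buffer holds an item: rehearsal strengthens its trace."""
    if state["buffer"] is not None:
        state["strength"][state["buffer"]] += 1.0
        return True
    return False

def run(items, cycles=8):
    state = {"pending": list(items), "buffer": None, "strength": {}}
    for _ in range(cycles):
        # Fixed rule priority; changing it would, strictly, be a new theory.
        for rule in (encode, rehearse):
            if rule(state):
                break
    return state["strength"]

print(run(["cat", "dog", "fox"]))  # {'cat': 0.0, 'dog': 0.0, 'fox': 5.0}
```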

Computer programs are executable and therefore provide a rigorous way of testing their adequacy.

Computer programs can be changed ad hoc (by altering production rules or parameters) to produce very different kinds of data.

One could thus worry about overfitting.

To prevent overfitting, a computational model must be tested against enough data to counter its degrees of freedom.
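One concrete way to counter those degrees of freedom is to fit the model's free parameters on part of the data and score it only on held-out data. A minimal sketch, assuming a hypothetical two-parameter forgetting model and synthetic data:

```python
import random

random.seed(0)

# Hypothetical recall data: (delay, proportion recalled), generated from a
# noisy power law purely for the sake of the example.
data = [(t, 0.9 * t ** -0.4 + random.gauss(0.0, 0.03)) for t in range(1, 41)]
random.shuffle(data)
train, test = data[:20], data[20:]

def mse(params, points):
    a, b = params
    return sum((a * t ** -b - y) ** 2 for t, y in points) / len(points)

# Fit the model's two free parameters on the training half only.
grid = [(a / 20, b / 20) for a in range(1, 40) for b in range(1, 40)]
best = min(grid, key=lambda p: mse(p, train))

# The honest score is the error on data the parameters never saw.
print("fitted (a, b):", best)
print("train MSE:", mse(best, train))
print("test  MSE:", mse(best, test))
```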

It is hard to explain higher-level cognition solely in terms of correspondence to perception or action.

The traditional view of cognitive representation needs to be extended rather than replaced by aspects and mechanisms of correspondence to perception and action.

Eliasmith et al. model sensory-motor processing as task-dependent compression of sensory data and decompression of motor programs.
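Read computationally (this is a schematic reduction, not Eliasmith et al.'s actual model), the idea is a task-dependent bottleneck: high-dimensional sensory input is compressed into a low-dimensional state, from which a higher-dimensional motor program is decompressed. A toy linear version with made-up dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: rich sensory input, a small task-relevant state,
# and a longer motor program read out from that state.
n_sensory, n_state, n_motor = 256, 8, 64

# Task-dependent compression: which sensory dimensions survive depends on
# the task (here just a fixed random linear map, for illustration only).
compress = rng.normal(size=(n_state, n_sensory)) / np.sqrt(n_sensory)
decompress = rng.normal(size=(n_motor, n_state)) / np.sqrt(n_state)

sensory = rng.normal(size=n_sensory)
state = compress @ sensory        # lossy: 256 numbers become 8
motor = decompress @ state        # the motor program is unpacked again

print(state.shape, motor.shape)   # (8,) (64,)
```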

De Kamps and van der Velde argue for combinatorial productivity and systematicity as fundamental concepts for cognitive representations. They introduce a neural blackboard architecture which implements these principles for visual processing and in particular for object-based attention.
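A drastically reduced sketch of the binding idea behind such an architecture (the gate mechanism below is a stand-in, not De Kamps and van der Velde's circuitry): a fixed vocabulary of assemblies gains combinatorial productivity because temporary bindings, not new representations, encode each new combination.

```python
# Toy blackboard binding: a fixed set of word assemblies and role assemblies
# is combined productively by opening "gates" (temporary bindings) between
# them, rather than by creating a new representation for each sentence.

words = {"cat", "dog", "chases"}
roles = {"agent", "action", "patient"}

class Blackboard:
    def __init__(self):
        self.gates = set()  # an open gate is an active temporary binding

    def bind(self, word, role):
        assert word in words and role in roles
        self.gates.add((word, role))

    def query(self, role):
        return [w for (w, r) in self.gates if r == role]

# "cat chases dog"; "dog chases cat" would reuse the very same assemblies
# with different gates open -- that is the combinatorial productivity.
bb = Blackboard()
bb.bind("cat", "agent")
bb.bind("chases", "action")
bb.bind("dog", "patient")
print(bb.query("agent"))  # ['cat']
```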

Purely computational, Bayesian accounts of cognition are underconstrained: they specify what is computed, but not how the computation is carried out.
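One way to see the underconstraint: the same posterior can be produced by very different processes, so the Bayesian description alone does not say which mechanism a mind uses. A small made-up example computes one posterior twice, by exact enumeration and by rejection sampling:

```python
import random

random.seed(0)

# A two-hypothesis inference problem: which coin produced the observed flips?
priors = {"fair": 0.5, "biased": 0.5}
p_heads = {"fair": 0.5, "biased": 0.8}
flips = [1, 1, 0, 1, 1]  # 1 = heads

def likelihood(h):
    p = p_heads[h]
    out = 1.0
    for f in flips:
        out *= p if f else 1 - p
    return out

# Process 1: exact enumeration.
joint = {h: priors[h] * likelihood(h) for h in priors}
z = sum(joint.values())
exact = {h: joint[h] / z for h in joint}

# Process 2: rejection sampling -- a very different mechanism, same target.
hits = {"fair": 0, "biased": 0}
for _ in range(100_000):
    h = random.choice(list(priors))
    sim = [1 if random.random() < p_heads[h] else 0 for _ in flips]
    if sim == flips:
        hits[h] += 1
approx = {h: hits[h] / sum(hits.values()) for h in hits}

print(exact, approx)  # numerically close, though the mechanisms differ
```

Both processes agree on the numbers, yet as process models they would make very different predictions about, for example, reaction times.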

Selfridge's Pandemonium is a progenitor, and arguably the progenitor, of all hierarchical cognitive architectures. It comprises a hierarchy of layers in which each layer detects patterns in the activity of the more primitive layer preceding it.
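A minimal sketch of the scheme (the feature and letter demons are invented for illustration): each demon "shouts" in proportion to the evidence in the layer below, and a decision demon at the top picks the loudest.

```python
# Toy Pandemonium: each layer's demons respond to the activity of the layer
# below; a decision demon at the top picks the loudest cognitive demon.

image = {"horizontal": 1.0, "vertical": 1.0, "oblique": 0.0}  # data demons

# Feature demons: weighted detectors over the data demons' output.
feature_weights = {
    "right_angle": {"horizontal": 1.0, "vertical": 1.0},
    "diagonal":    {"oblique": 1.0},
}

# Cognitive demons: letters defined over the feature demons' output.
letter_weights = {
    "L": {"right_angle": 1.0},
    "X": {"diagonal": 2.0},
}

def layer(weights, below):
    return {name: sum(w * below.get(k, 0.0) for k, w in ws.items())
            for name, ws in weights.items()}

features = layer(feature_weights, image)
letters = layer(letter_weights, features)
print(max(letters, key=letters.get))  # 'L' -- the loudest demon wins
```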

Early work on layered architectures pre-wired all but the top-most layer and learned only that layer.

Unsupervised learning extracts regularities in the input. The detected regularities can then be used for actual discrimination, or unsupervised learning can be applied again to detect regularities in these regularities.
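A sketch of the "regularities in regularities" idea, stacking PCA twice with a nonlinearity in between (the choices here are arbitrary; any unsupervised learner would do):

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_fit(x, k):
    """Return the top-k principal directions of x (rows = samples)."""
    x = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return vt[:k]

# Hypothetical data: 500 samples of 32-dimensional input with hidden structure.
latent = rng.normal(size=(500, 4))
mixing = rng.normal(size=(4, 32))
data = latent @ mixing + 0.1 * rng.normal(size=(500, 32))

# Stage 1: extract regularities in the raw input.
w1 = pca_fit(data, 8)
h1 = np.maximum(data @ w1.T, 0.0)  # nonlinearity so stage 2 sees new structure

# Stage 2: the same unsupervised step applied to stage 1's output,
# i.e. regularities in the regularities.
w2 = pca_fit(h1, 3)
h2 = h1 @ w2.T

print(h1.shape, h2.shape)  # (500, 8) (500, 3)
```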

Cognitive science must not only provide generative models that predict natural cognitive behavior within a normative framework, but also tie these models to theories of how the necessary computations are realised.