# Show Tag: computational-modelling

Humphreys offers a simple, admittedly somewhat lacking, definition of computational science:

"computational science consists in the development, exploration, and implementation of computational models of nonmathematical systems using concrete computational devices."

According to Humphreys, the difference between a simulation and a representation or computational model is that in a simulation the formulae are actually evaluated: the formula for an ellipse together with parameters (and initial conditions) is a representation of a planetary orbit, and a specialized subset of Newtonian physics plus data is a computational model of it, but only the model plus solutions to the formulae for a finite number of time steps is a simulation. (My examples.)
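
The distinction can be made concrete with a minimal sketch (my construction, not Humphreys'): the equations of motion inside the loop below are the computational model, and only their evaluation over a finite number of time steps (here with a crude semi-implicit Euler integrator) is the simulation. The function name and parameters are hypothetical.

```python
def simulate_orbit(x, y, vx, vy, dt=0.01, steps=1000, gm=1.0):
    """Step a body through a Newtonian 1/r^2 gravity field.

    The update equations are the model; evaluating them for
    `steps` finite time steps is the simulation.
    """
    trajectory = []
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -gm * x / r3, -gm * y / r3  # Newtonian acceleration
        vx, vy = vx + ax * dt, vy + ay * dt  # update velocity ...
        x, y = x + vx * dt, y + vy * dt      # ... then position
        trajectory.append((x, y))
    return trajectory

# A roughly circular orbit: radius 1, tangential speed 1, gm = 1.
path = simulate_orbit(1.0, 0.0, 0.0, 1.0)
```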

A computational model has six components, according to Humphreys:

1. A 'computational template', together with types of boundary and initial conditions, the 'basic computational form';
2. Construction assumptions;
3. Correction set;
4. An interpretation;
5. An initial justification of the template;
6. An output representation.

A simulation can be thought of as a thought experiment: Given a correct mathematical model of something, it tries out how that model behaves and translates (via the output representation and interpretation) the behavior back into the realm of the real world.

A computational model according to Humphreys seems to me to be a computational theory (in the logician's sense) and a manual for applying it to the world.

Bayesian models have been used to model natural cognition.

Behrens et al. modeled learning of reward probabilities using the model of a Bayesian learner.

Behrens et al. found that humans take into account the volatility of reward probabilities in a reinforcement learning task.

The way they took the volatility into account was qualitatively modelled by a Bayesian learner.
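
Behrens et al.'s actual model is hierarchical; as a loose, hypothetical sketch of the qualitative idea only, a beta-Bernoulli learner whose evidence is exponentially discounted behaves like a learner that assumes volatility: the stronger the discounting, the faster the estimate tracks a change in the reward probability. The function and its `decay` parameter are my illustration, not the authors' model.

```python
def track_reward_probability(outcomes, decay=1.0):
    """Beta-Bernoulli estimate of a reward probability.

    `decay` < 1 discounts old evidence before each update, a crude
    stand-in for assuming the underlying probability is volatile.
    """
    a, b = 1.0, 1.0  # uniform Beta(1, 1) prior
    estimates = []
    for reward in outcomes:            # reward is 0 or 1
        a = decay * a + reward         # discounted count of rewards
        b = decay * b + (1 - reward)   # discounted count of non-rewards
        estimates.append(a / (a + b))  # posterior mean
    return estimates

# After the reward probability flips, the "volatile" learner adapts faster.
outcomes = [1] * 20 + [0] * 5
stable = track_reward_probability(outcomes, decay=1.0)
volatile = track_reward_probability(outcomes, decay=0.8)
```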

According to Sun, a computational cognitive model is a theory of cognition which describes mechanisms and processes of cognition computationally and thus is 'runnable'.

Because computational cognitive models are runnable and produce behavior, they can, according to Sun, be validated by comparison to human data.

Computational cognitive models can reproduce human behavior

• roughly,
• qualitatively,
• quantitatively.
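
A quantitative comparison can be as simple as an error measure between model output and human data over matched conditions; a minimal sketch, where both data vectors are made up for illustration:

```python
def rmse(model, human):
    """Root-mean-square error between model predictions and human data."""
    assert len(model) == len(human)
    return (sum((m - h) ** 2 for m, h in zip(model, human)) / len(model)) ** 0.5

# hypothetical choice proportions, one entry per experimental condition
human_p = [0.62, 0.71, 0.55, 0.80]
model_p = [0.60, 0.75, 0.50, 0.78]
fit = rmse(model_p, human_p)  # 0.035
```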

Sun argues that computational cognitive models describe mechanisms and representations in cognitive science well.

Sun argues that computational cognitive models provide productive rather than just descriptive accounts of cognitive phenomenology and therefore have more explanatory value.

I would argue that algorithms are more accessible than $\mu$-recursive functions as a way to explain certain things.

Some things are just easier thought of in terms of manipulations than of equations. But this does not say anything about the expressiveness of either tool.

Also, algorithms are already 'runnable' and need no translation into computer programs to be studied by computational methods.

When translating a cognitive theory on the verbal-conceptual level to a computational model, one has to flesh out the description of the model by making decisions.

Some of those decisions are, as Sun says, 'just to make the simulation run', i.e. they are arbitrary but consistent with the theory.

When translating a cognitive theory on the verbal-conceptual level to a computational model, one often discovers logical gaps in the original theory.

Sun argues that a computational model for a verbal-conceptual theory in cognitive science is a theory in itself because it is more specific.

Strictly speaking, following Sun's argument, every parameterization of an algorithm realizing a computational model is a theory distinct from every other parameterization.

Sun argues that the failure of one computational model which is a more specific version of a verbal-conceptual theory does not invalidate the theory, especially if a different computational model specifying that theory produces phenomenology consistent with empirical data.

Sun acknowledges that, when translating from verbal-conceptual cognitive theories to computational models, certain decisions must be made 'just to make the simulation run'.

He does not seem to dwell on their role in the computational model as a theory; are they ontological commitments?

Computer programs can be theories of cognition: The theory represented by such a program would state that (certain) changes of state in a cognitive system are isomorphic to the changes in the computer determined by the program.

Computer programs are executable and therefore provide a rigorous way of testing their adequacy.

Computer programs can be changed ad-hoc to produce very different kinds of data (by changing production rules or parameters).

One could thus worry about overfitting.

To prevent overfitting, a computational model must be tested against enough data to counter its degrees of freedom.
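
A hypothetical toy illustration of that point: a "model" with one degree of freedom per data point fits the training data perfectly, but on held-out data it is beaten by a one-parameter model.

```python
import random

random.seed(0)

def bernoulli(n, p=0.7):
    """n simulated reward outcomes with true probability p."""
    return [1 if random.random() < p else 0 for _ in range(n)]

def mse(preds, data):
    return sum((y - q) ** 2 for q, y in zip(preds, data)) / len(data)

train, test = bernoulli(200), bernoulli(200)

p_hat = sum(train) / len(train)   # one free parameter
memorized = train                 # 200 free parameters: one per trial

train_err_memo = mse(memorized, train)      # exactly 0: a perfect fit
test_err_simple = mse([p_hat] * 200, test)
test_err_memo = mse(memorized, test)        # worse than the simple model
```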

Ghahramani et al. infer the cost function presumably guiding natural multisensory integration from behavioral data.

Ghahramani et al. model multisensory integration as a process minimizing uncertainty.
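
The minimum-uncertainty idea is commonly formalized as precision-weighted (inverse-variance) cue combination; the sketch below is my rendering of that standard scheme, not the paper's code.

```python
def combine_cues(estimates, variances):
    """Minimum-variance fusion of independent noisy cues.

    Each cue is weighted by its precision (inverse variance); the
    fused estimate has lower variance than any single cue.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * x for w, x in zip(weights, estimates)) / total
    return fused, 1.0 / total  # fused estimate and its variance

# e.g. a precise visual cue and a noisy haptic cue about one location
est, var = combine_cues([10.0, 12.0], [1.0, 4.0])  # est = 10.4, var = 0.8
```

The fused variance (0.8) is below the best single cue's variance (1.0), which is exactly the sense in which the integration minimizes uncertainty.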