Show Tag: modelling

Simulations are different from experiments on the `real thing', but that is true also of all other kinds of theoretical model.

Computer simulations have benefits over empirical experiments:

  • wide ranges of initial conditions can be tested;
  • they can be replicated exactly;
  • they can be performed where the corresponding experiment would be impossible or unfeasible;
  • they are Gedankenexperimente without the psychological biases (well, somewhat);
  • they are more amenable to in-depth inspection regarding satisfaction of assumptions—code can be validated, reality cannot;
  • they can be used to guide analytical research.

Models of real entities with lower dimensionality than those entities can still show certain qualitative features of the entities. However, sometimes they don't.

Models of real entities with lower resolution than the processes taking place in those entities can still show certain qualitative features of the entities. However, sometimes they don't.

A simple, somewhat lacking definition of computational science is, according to Humphreys:

``computational science consists in the development, exploration, and implementation of computational models of nonmathematical systems using concrete computational devices.''

According to Hartmann,

``A model is called dynamic, if it... includes assumptions about the time-evolution of the system. ... Simulations are closely related to dynamic models. More concretely, a simulation results when the equations of the underlying dynamic model are solved. This model is designed to imitate the time evolution of a real system. To put it another way, a simulation imitates one process by another process. In this definition, the term `process’ refers solely to some object or system whose state changes in time. If the simulation is run on a computer, it is called a computer simulation.''

According to Humphreys, the difference between a simulation and a representation or computational model is that in a simulation the formulae are evaluated: the formula for an ellipse together with parameters (and initial conditions) is a representation of a planetary orbit, and a specialized subset of Newtonian physics plus data is a computational model of it, but only the model plus solutions to the formulae for a finite number of time steps is a simulation. (My examples.)
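
To make the distinction concrete, here is a minimal Python sketch of my orbit example (all names, the parameter values and the uniform angular sweep are my own simplifying assumptions; a faithful orbit would require solving Kepler's equation): the formula plus parameters is merely a representation, and only evaluating it at a finite number of time steps yields something simulation-like.

    # Representation: an ellipse formula plus parameters
    # (semi-major axis a, eccentricity e). Values are arbitrary.
    import math

    a, e = 1.0, 0.3
    b = a * math.sqrt(1 - e**2)

    def position(t, period=1.0):
        """Position on the ellipse at time t, assuming a uniform angular
        sweep (this ignores Kepler's second law; illustration only)."""
        theta = 2 * math.pi * t / period
        return (a * math.cos(theta), b * math.sin(theta))

    # Simulation-like step: solutions of the formula for a finite
    # number of time steps.
    trajectory = [position(step / 100) for step in range(100)]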

A computational model has six components, according to Humphreys:

  1. A `computational template', together with types of boundary and initial conditions—the `basic computational form';
  2. Construction assumptions;
  3. Correction set;
  4. An interpretation;
  5. An initial justification of the template;
  6. An output representation.

A simulation can be thought of as a thought experiment: Given a correct mathematical model of something, it tries out how that model behaves and translates (via the output representation and interpretation) the behavior back into the realm of the real world.

I would add that the model need not be correct if the simulation is meant to test the model's correctness. In that case, the thought experiment tests the hypothesis that the model is indeed correct for the object or process of which it is supposed to be a model, by generating predictions (solutions to the mathematical model). Those predictions are then compared to existing behavioral data of the object or process being modeled.

A computer simulation then is a thought experiment carried out by a computer.

A computational model according to Humphreys seems to me to be a computational theory (in the logician's sense) and a manual for applying it to the world.

Bayesian models have been used to model natural cognition.

Behrens et al. found that humans take into account the volatility of reward probabilities in a reinforcement learning task.

The way they took the volatility into account was qualitatively modelled by a Bayesian learner.
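
A much-reduced sketch in Python of such a learner (my own illustration, not the hierarchical model of Behrens et al., which additionally infers the volatility itself; the grid, the fixed volatility parameter and the outcome sequence are arbitrary assumptions):

    # Track a drifting reward probability with a grid-based Bayesian learner.
    import numpy as np

    grid = np.linspace(0.01, 0.99, 99)       # candidate reward probabilities
    belief = np.ones_like(grid) / grid.size  # uniform prior over them

    def update(belief, outcome, volatility=0.1):
        # Diffusion step: the more volatile the environment is assumed to be,
        # the more the belief is smoothed towards uniform before updating.
        belief = (1 - volatility) * belief + volatility / belief.size
        # Bernoulli likelihood of the observed outcome (1 = reward, 0 = none).
        likelihood = grid if outcome == 1 else 1 - grid
        posterior = belief * likelihood
        return posterior / posterior.sum()

    for outcome in [1, 1, 0, 1, 0, 0, 1]:    # made-up outcome sequence
        belief = update(belief, outcome)
    estimate = np.dot(grid, belief)          # posterior mean reward probability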

A model, in the logician's sense, is a substitution of the variables in a theory by objects (individuals) which satisfies all the theory's sentences.

Verschure argues that models of what he calls the mind, brain, body nexus should particularly account for data about behavior at the system level, i.e. overt behavior. He calls this convergent validation.

In Verschure's concept of convergent validation, the researcher looks to nature not for inspiration but for constraints with which to falsify or validate models.

Mommy, where do models come from?

Should models be informed by normative theories like Bayesian or decision theory?

Self-organizing maps (SOMs) have been used to model biology.

Adams et al. state that others have used SOM-like algorithms for modelling biology and for robotic applications before (and list examples).
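
For reference, a minimal generic SOM in Python (not any of the specific published models alluded to above; the 1-D map size, learning-rate schedule and neighbourhood width are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    n_units, dim = 20, 2
    weights = rng.random((n_units, dim))          # 1-D map of 20 units
    data = rng.random((1000, dim))                # toy 2-D input data

    for t, x in enumerate(data):
        lr = 0.5 * (1 - t / len(data))            # decaying learning rate
        sigma = 3.0 * (1 - t / len(data)) + 0.5   # decaying neighbourhood width
        winner = np.argmin(np.linalg.norm(weights - x, axis=1))
        dist = np.abs(np.arange(n_units) - winner)   # distance on the map
        h = np.exp(-dist**2 / (2 * sigma**2))        # neighbourhood function
        weights += lr * h[:, None] * (x - weights)   # move units towards x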

Chalk et al. hypothesize that biological cognitive agents learn a generative model of sensory input and rewards for actions.

Hinoshita et al. argue that by watching language learning in RNNs, we can learn about how the human brain might self-organize to learn language.

According to Sun, a computational cognitive model is a theory of cognition which describes mechanisms and processes of cognition computationally and thus is `runnable'.

Computational cognitive models are runnable and produce behavior and can therefore be validated, according to Sun, by comparison to human data.

Computational cognitive models can reproduce human behavior

  • roughly,
  • qualitatively,
  • quantitatively.

Sun states:

Any amount of detail of a "mechanism" [...] (provided that it is Turing computable) can be described in an algorithm, while it may not be the case that it can be described through mathematical equations (that is to say, algorithms are more expressive).

However, $\mu$-recursive functions are Turing-complete and they can be expressed in mathematical equations.
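
For reference, the unbounded minimization operator, which gives the $\mu$-recursive functions their Turing-complete power, is itself just one more equation (standard definition):

\[
\mu y\,[f(\vec{x},y)=0] = z \quad\Longleftrightarrow\quad f(\vec{x},z)=0 \ \text{ and } \ f(\vec{x},i) \text{ is defined and } f(\vec{x},i)\neq 0 \text{ for all } i<z.
\]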

Actually, I believe most computational cognitive models (which is what Sun writes about) can be expressed in relatively simple, though long, recursive equations.

I would argue that algorithms are more accessible than $\mu$-recursive functions as a way to explain certain things.

Some things are just easier to think of in terms of manipulations than in terms of equations. But this does not say anything about the expressiveness of either tool.

Also, algorithms are already 'runnable' and need no translation into computer programs to be studied by computational methods.

Sun argues that computational cognitive models provide productive rather than just descriptive accounts of cognitive phenomenology and therefore have more explanatory value.

A problem with this thought is that Sun subscribes to constructive empiricism, which does not hold that all unobservable entities featuring in a theory truly exist. Since these entities take part in the production of the phenomenology, it is unclear what explanatory value the theories gain from being productive.

According to Sun, it has been argued that models and simulations are only tools to study theories, not theories themselves.

The scientific value of models and simulations has been questioned.

When translating a cognitive theory on the verbal-conceptual level to a computational model, one has to flesh out the description of the model by making decisions.

Some of those decisions are, as Sun says, 'just to make the simulation run', i.e. they are arbitrary but consistent with the theory.

When translating a cognitive theory on the verbal-conceptual level to a computational model, one often discovers logical gaps in the original theory.

Sun argues that a computational model for a verbal-conceptual theory in cognitive science is a theory in itself because it is more specific.

Strictly speaking, following Sun's argument, every parameterization of an algorithm realizing a computational model is a theory distinct from every other parameterization.

Sun argues that the failure of one computational model which is a more specific version of a verbal-conceptual theory does not invalidate the theory, especially if a different computational model specifying that theory produces phenomenology consistent with empirical data.

Sun acknowledges the fact that certain decisions must be made, when translating from verbal-conceptual cognitive theories to computational models, which are 'just to make the simulation run'.

He does not seem to dwell on their role in the computational model as a theory; are they ontological commitments?

Computer programs can be theories of cognition: The theory represented by such a program would state that (certain) changes of state in a cognitive system are isomorphic to the changes in the computer determined by the program.

Computer programs are executable and therefore provide a rigorous way of testing their adequacy.

Computer programs can be changed ad-hoc to produce very different kinds of data (by changing production rules or parameters).

One could thus worry about overfitting.

To prevent overfitting, a computational model must be tested against enough data to counter its degrees of freedom.
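
A toy Python illustration of that point (entirely hypothetical data and models): a model with more degrees of freedom fits its training data at least as well, but its adequacy only shows on held-out data.

    # Compare polynomial models of increasing flexibility on held-out data.
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0, 1, 40)
    y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.size)
    train, test = np.arange(0, 40, 2), np.arange(1, 40, 2)  # interleaved split

    for degree in (1, 3, 10):                    # increasing degrees of freedom
        coeffs = np.polyfit(x[train], y[train], degree)
        train_err = np.mean((np.polyval(coeffs, x[train]) - y[train])**2)
        test_err = np.mean((np.polyval(coeffs, x[test]) - y[test])**2)
        print(degree, round(train_err, 3), round(test_err, 3))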

Simon calls those theories in psychology `models' which make predictions by quantitatively describing structural characteristics of the brain.

Mechanism schemata and mechanism sketches seem to me to be what is often referred to as a model in computational neuroscience.

Ursino et al. divide models of multisensory integration into three categories:

  1. Bayesian models (optimal integration etc.),
  2. neuron and network models,
  3. models on the semantic level (symbolic models).

According to Markman and Dietrich, conventional views of neural representations agree on the following five principles:

  1. Representations are states of the system which carry information about the world,
  2. some information about the world must be stored in a cognitive system and accessible without the percepts from which it was originally derived,
  3. representations use symbols,
  4. some information is represented amodally, i.e. independent of perception or action, and
  5. some representations are not related to the cognitive agent's embodiment (do not require embodiment).

According to Markman and Dietrich, symbolic systems are too rigid to cope well with the variability and ambiguity of situations.

According to Markman and Dietrich, some of the problems of symbolic systems are handled by models which use modal instead of amodal representations.

The theory around situated cognition holds that cognitive processes cannot be separated from context:

  • What needs to be represented internally depends on what is readily available in the environment and
  • some things are easier to check in the environment (by gathering information, trying things out) than to infer or simulate within the cognitive agent itself.

The theory of embodied cognition states that, to model natural cognition, it is necessary to build embodied cognitive agents because cognition cannot be understood out of context.

According to Markman and Dietrich, the traditional view of cognitive representations suffers from being too static in a dynamic world: there are no discrete state transitions in the world of biological cognitive agents, so any model that operates on representations requiring discrete state transitions is inaccurate.

Dynamical systems have been used to model actual dynamics in cognitive systems.

There are alternative views to the traditional view of cognitive representations.

The traditional view of cognitive representation needs to be extended with aspects and mechanisms of correspondence to perception and action, rather than replaced.

Neurorobotics is an activity which creates embodied cognitive agents.

Markman and Dietrich argue against the replacement hypothesis, saying that all of the alternative approaches still assume that brain states do reflect world properties.

Krasne et al. distinguish between 'top-down' and 'bottom-up' models: Top-down models are designed to explain phenomenology. Bottom-up models are constructed from knowledge about low-level features of the object being modeled.

Bottom-up models can be used to test whether we already have most of the important features of the object being modeled; if the phenomenology is right, then probably we have, otherwise there's something missing.

Top-down models help us understand and interpret the phenomenology we see in the object being modeled.

Top-down and bottom-up models are complementary.

The fact that no long-range inhibitory/short-range excitatory connection pattern was found in Lee's in-vitro study of the rat intermediate SC might also pose a problem for divisive normalization as a modeling assumption for the SC.

Pure neural modeling does not explain complex behavior.

Much of neural processing can be understood as compression and de-compression.

Since much of what the visual system does can be seen as compression, since SOMs can do vector quantization (VQ) and since VQ is a compression technique, it makes sense that SOMs have been useful in modeling visual processing.

Different branches of science have different, sometimes incompatible definitions of what a model is and what its relationship to a theory is.

There's a difference between showing that an instance of sensorimotor processing behaves like a Bayesian model and saying it is optimal:

The Bayesian model uses the information it has optimally, but this does not mean that it uses the right kind of information.

Ideal observer models of some task are mathematical models describing how an observer might achieve optimal results in that task under the given restrictions, most importantly under the given uncertainty.
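
The textbook instance of such a model, for two independent cues $\hat{s}_1, \hat{s}_2$ about the same quantity corrupted by Gaussian noise with variances $\sigma_1^2, \sigma_2^2$, weights each cue by its reliability (inverse variance):

\[
\hat{s} = w_1 \hat{s}_1 + w_2 \hat{s}_2, \qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_1^2 + 1/\sigma_2^2}, \qquad
\sigma^2_{\hat{s}} = \frac{\sigma_1^2\,\sigma_2^2}{\sigma_1^2 + \sigma_2^2}.
\]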

Ideal observer models of cue integration were introduced in vision research but are now used in other uni-sensory tasks (auditory, somatosensory, proprioceptive and vestibular).

There are two strands in multi-sensory research: mathematical modeling and modeling of neurophysiology.

Yay! I'm bridging that gulf as well!

According to Ma et al.'s work, computations in neurons doing multi-sensory integration should be additive or sub-additive. This is at odds with observed neurophysiology.

Fetsch et al. use divisive normalization to explain the discrepancy between observed neurophysiology—superadditivity—and the normative solution to single-neuron cue integration proposed by Ma et al.:

They propose that the network activity is normalized in order to keep neurons' activities within their dynamic range. This would lead to the apparent reliability-dependent weighting of responses found by Morgan et al. and superadditivity as described by Stanford et al.
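
A generic form of divisive normalization (in the spirit of Carandini and Heeger's formulation, not necessarily the exact equation used by Fetsch et al.), in which each neuron's driving input $E_i$ is divided by the pooled activity of the population:

\[
R_i = \frac{E_i^{\,n}}{\sigma^n + \sum_j E_j^{\,n}},
\]

where $n$ is an exponent and $\sigma$ a semi-saturation constant that keeps responses within the dynamic range.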

The account of abstraction due to Hoare is that we first cluster objects according to arbitrary similarities. We then find clusters which are predictive of the future and name them. Subsequently, the similarities within such a named cluster are thought of as essential whereas the differences are perceived as unimportant.

The account of the process of abstraction due to Hoare is:

  • Abstraction (selecting those properties of the real things we want to represent in our abstraction)
  • Representation (choosing symbols like words, pictograms... for the abstraction)
  • Manipulation (declaring rules for how to use the symbolic representation to predict what will happen to the real things under certain circumstances)
  • Axiomatisation (declaring rigorously the relationship between the symbols used in the representation and the properties of the real things being abstracted from)

Bayesian models cannot explain why natural cognition is not always optimal or predict behavior in cases where it is not.

Computational models cannot predict non-functional effects, like response timing.

Purely computational, Bayesian accounts of cognition are underconstrained.

Without constraints from ecological and biological (mechanistic) knowledge, computational and evolutionary accounts of natural cognition run the risk of finding optimality wherever they look, as there will always be some combination of model and assumptions to match the data.

Bounded rationality, the idea that an organism may be as rational as possible given its limitations, can be useful, but it is prone to producing tautologies: Any organism is as rational as it can be given its limitations if those limitations are taken to be everything that limits its rationality.

Jones and Love propose three ways of `Bayesian Enlightenment'.

Critically, the components of the `Bayesian Fundamentalist's' psychological models are not assumed to correspond to anything in the subject's mind.

Fully supervised learning algorithms are biologically implausible.

We do not know the types of functions computable by neurons.

A neural population may encode a probability density function if each neuron's response represents the probability (or log probability) of some concrete value of a latent variable.
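
A minimal Python sketch of reading such a code, under the assumption (mine) that each unit's activity equals the log probability of its preferred value of the latent variable (all numbers are made up):

    # Decode a discretized probability density from population activity.
    import numpy as np

    preferred_values = np.linspace(-10, 10, 50)  # one latent value per neuron
    log_p = -0.5 * ((preferred_values - 2.0) / 3.0)**2  # toy activity pattern
    p = np.exp(log_p - log_p.max())
    p /= p.sum()                                  # decoded (discretized) density
    mean_estimate = np.dot(preferred_values, p)   # point estimate from the code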

In many Bayesian models, the prior and hypothesis space are solely chosen for the convenience of the modeler, not for their plausibility.

Normativity is one thing a computer scientist can contribute to neuroscience: explain what the brain should do and how what we find in nature implements that.

Abstraction is one thing a computer scientist can contribute to neuroscience: if you don't want to control a cat, don't use cat hardware (but be sure to use all the inspiration cat hardware can give you for your case).

Through simulations of neurons (and neuron ensembles), neurons can be monitored in numbers and over time scales that are both impossible in vivo.

This is mainly an argument in favor of computational neuroscience. It is less valid for ANNs in classical AI, where neuron models are quite detached from biological neurons.

According to Rucci et al., neuroscientists can use robots to quantitatively test and analyze their theories.

The degree to which neuroscientists can draw conclusions from computational models depends on the models' biological accuracy.

If input to biologically plausible models is too dissimilar to natural input, then that can lead to non-natural behavior of the model.

Sensory noise in robotic experiments validates a model's robustness. It is always realistic (but not necessarily natural).