# Show Tag: science

Simulations are used to explore intractable mathematical models, in lieu of empirical experiments that are hard or impossible to conduct, or as pilot experiments.

"System S provides a core simulation of an object or process B just in case S is a concrete computational device that produces, via a temporal process, solutions to a computational model [...] that correctly represents B, either dynamically or statically. If in addition the computational model used by S correctly represents the structure of the real system R, then S provides a core simulation of system R with respect to B."

A full simulation in Humphreys' sense is the combination of a core simulation with an output representation.

Graphical representations of numerical data are important for discovering qualitative properties.

Visualizations of numerical data are important in science.

For humans doing research, visualizations are not just handy—they are part of the research.

For humans running simulations, visualizations (output representations) are not just handy—they are actually part of the simulations because without those, humans cannot interpret the results.

One of the goals of science is human understanding.

Without human understanding, at least one goal of science is not fulfilled.

According to Humphreys, simulation is a set of techniques rather than a single tool. It includes

• numerical solution of equations,
• visualization,
• error correction on the computational methods,
• data analysis,
• model explorations.

We extend ourselves using technology in the sense that we build things that give us epistemological access to parts of reality which would otherwise be beyond our reach.

This includes instruments which help us perceive the world in ways not given to us naturally, like microscopes or compasses, and machines which help us think about our theories more deeply than our cognitive limitations would otherwise permit.

Computational Neuroscience is computational science in neuroscience.

A computational model has six components, according to Humphreys:

1. A 'computational template', together with types of boundary and initial conditions (the 'basic computational form');
2. Construction assumptions;
3. Correction set;
4. An interpretation;
5. Initial justification of the template;
6. An output representation.

A simulation can be thought of as a thought experiment: Given a correct mathematical model of something, it tries out how that model behaves and translates (via the output representation and interpretation) the behavior back into the realm of the real world.

I would add that the model need not be correct when the simulation itself is used to test the model's correctness. In that case, the thought experiment tests the hypothesis that the model is indeed correct for the object or process it is supposed to model, by generating predictions (solutions to the mathematical model). Those predictions are then compared to existing behavioral data from the object or process being modeled.
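This kind of validation-by-prediction can be sketched in a few lines. Everything here is hypothetical: the model (a logarithmic response rule, chosen only for illustration), the "observed" data, and the error metric (RMSE) are all made up for the sake of the example.

```python
import math

def model_prediction(stimulus_intensity):
    # Hypothetical model: response grows logarithmically with intensity
    # (chosen only for illustration, not taken from any real study).
    return math.log(1 + stimulus_intensity)

# Made-up "observed" behavioral data: (stimulus, measured response).
observations = [(1, 0.72), (2, 1.05), (4, 1.58), (8, 2.23)]

predictions = [model_prediction(s) for s, _ in observations]
rmse = math.sqrt(
    sum((p - r) ** 2 for p, (_, r) in zip(predictions, observations))
    / len(observations)
)

# A small error supports (but never proves) the hypothesis that the
# model is correct for the process being modeled.
print(f"RMSE between predictions and data: {rmse:.3f}")
```

The comparison step is where the thought experiment touches reality: the model's solutions are translated back into the same terms as the empirical data and measured against them.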

A computational model according to Humphreys seems to me to be a computational theory (in the logician's sense) and a manual for applying it to the world.

Computational cognitive models are runnable and produce behavior and can therefore be validated, according to Sun, by comparison to human data.

Computational cognitive models can reproduce human behavior

• roughly,
• qualitatively,
• quantitatively.

According to Sun (and Wikipedia), Realists believe that unobservable entities in scientific theories really do exist—they are just unobservable.

To Constructive Empiricists, however, accepting a theory only means believing in the existence of the observable parts.

Thus, in Quine's terminology, Realists' ontological commitments include the unobservables, whereas Constructive Empiricists' commitments don't.

Sun argues that mechanisms and representations (and thus computational models) are an important and necessary part of scientific theories, and that this is especially true in cognitive science.

Sun argues that computational cognitive models provide productive rather than just descriptive accounts of cognitive phenomenology and therefore have more explanatory value.

I would argue that algorithms are more accessible than $\mu$-recursive functions as a way to explain certain things.

Some things are just more easily thought of in terms of manipulations than in terms of equations. But this does not say anything about the expressiveness of either tool.

Also, algorithms are already 'runnable' and need no translation into computer programs to be studied by computational methods.
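Euclid's algorithm is a standard illustration of this point (my example, not the text's): its description as a manipulation rule is already, almost line for line, a runnable program.

```python
def gcd(a, b):
    # Euclid's algorithm stated as a manipulation rule:
    # "replace the pair (a, b) by (b, a mod b) until b becomes zero;
    # the remaining a is the greatest common divisor."
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # → 6
```

The same relation expressed as a mu-recursive function would define the same values, but the manipulation form is the one we can hand to a machine and study directly.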

The scientific value of models and simulations has been questioned.

When translating a cognitive theory on the verbal-conceptual level to a computational model, one has to flesh out the description of the model by making decisions.

Some of those decisions are, as Sun says, 'just to make the simulation run', i.e. they are arbitrary but consistent with the theory.

When translating a cognitive theory on the verbal-conceptual level to a computational model, one often discovers logical gaps in the original theory.

Sun argues that a computational model for a verbal-conceptual theory in cognitive science is a theory in itself because it is more specific.

Strictly speaking, following Sun's argument, every parameterization of an algorithm realizing a computational model is a theory distinct from every other parameterization.

Sun argues that the failure of one computational model which is a more specific version of a verbal-conceptual theory does not invalidate the theory, especially if a different computational model specifying that theory produces phenomenology consistent with empirical data.

According to Sun, one, if not the, aim of science is to come up with descriptions of phenomenology with lower and lower Kolmogorov complexity.

Framing the aim of science as coming up with descriptions of phenomenology with lower and lower Kolmogorov complexity is a very instrumentalist view.

Sun acknowledges that certain decisions must be made, when translating from verbal-conceptual cognitive theories to computational models, 'just to make the simulation run'.

He does not seem to dwell on their role in the computational model as a theory; are they ontological commitments?

Schools in psychology have been thought of as replacing each other. Simon argues instead that they build on top of each other, and that psychology is incremental rather than revolutionary.

The way to describe a dynamical system in time is to describe the rules for state transitions from one moment to another.

Human cognition is dynamic.

The way to describe human cognition is by describing how its state changes from one moment to another, given input.
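The state-transition view can be made concrete with a minimal sketch (my example): the system is fully specified by a rule mapping (current state, current input) to the next state. The leaky integrator below is a made-up instance of such a rule, not a model of cognition.

```python
def step(state, inp, leak=0.9):
    # Transition rule: the next state is a decayed copy of the current
    # state plus the current input.
    return leak * state + inp

state = 0.0
inputs = [1, 0, 0, 1, 0]
trajectory = []
for x in inputs:
    state = step(state, x)
    trajectory.append(round(state, 3))

print(trajectory)
```

Nothing about the system's behavior over time is stated anywhere except in the transition rule; the trajectory is entirely generated by iterating it.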

Theories of the same things at different levels of resolution are necessary.

Scientific theories usually presuppose entities below their level of resolution.

Scientific theories must often presuppose entities below their level of resolution even if those entities are not part of any lower-resolution theory; for some theories there is no such lower-resolution theory.

Scientists do not always flesh out their theories in full. They often describe only those parts they are interested in (and leave the rest in an abstract form). Machamer et al. call descriptions of mechanisms which leave out the detailed specification of some of their activities and entities mechanism schemata.

Mechanism schemata and mechanism sketches seem to me to be what is often referred to as a model in computational neuroscience.

Devising a mechanism schema can guide scientific progress: presupposing an activity as necessary can make one look for the entity that can perform it.

Machamer et al. call a mechanism schema with explicitly missing parts a mechanism sketch.

Since speech happens in a brain which is part of a body in a physical world, it is without doubt possible to describe it in terms of world-body-brain dynamics.

The question is whether that is a good way of describing it. Such a description may be very complex and difficult to handle—it might run against what explanation in science is supposed to do.

Some of the multisensory properties of the SC were known in the early seventies, only to be re-discovered much later (in lethal animal experiments).

In hypothesis testing, we usually know that neither the null hypothesis nor the alternative hypothesis can be fully true. They can at best be approximations to, i.e. different from, reality. However, the procedure of hypothesis testing consists of testing which of the two is more likely to be true given a sample—not which of the two is the better approximation. Thus, strictly speaking, we usually apply hypothesis testing to problems the theory was not designed for.
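A small simulation makes the point vivid (my sketch; the numbers are invented): take a null hypothesis "mean = 0" that is false, but only barely (the true mean is 0.01). With enough data, a z-test still rejects it, because the procedure asks which hypothesis is likelier given the sample, not which is the better approximation.

```python
import math
import random
from statistics import NormalDist

random.seed(42)
n = 1_000_000
# The null "mean = 0" is nearly, but not exactly, true.
sample = [random.gauss(0.01, 1.0) for _ in range(n)]

mean = sum(sample) / n
z = mean * math.sqrt(n)                 # z-statistic under H0: mean = 0, sd = 1
p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value

# With a sample this large, H0 is rejected even though it is an
# excellent approximation to reality.
print(f"z = {z:.2f}, p = {p:.2g}")
```

The rejection is formally correct and practically uninformative: the test has detected a difference of 0.01, not told us which description of reality is more useful.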

Since brains are just things that evolved out of a need for efficient information processing, all mechanisms in them can be interpreted as emergent phenomena. Taking a normative stance and attributing a cause to them can be enlightening. It is a matter of scientific pragmatism whether one wants to look at a specific phenomenon in terms of why it evolved or what problem it solves, or (often) both.

Krasne et al. distinguish between 'top-down' and 'bottom-up' models: Top-down models are designed to explain phenomenology. Bottom-up models are constructed from knowledge about low-level features of the object being modeled.

Bottom-up models can be used to test whether we already have most of the important features of the object being modeled; if the phenomenology is right, then probably we have, otherwise there's something missing.

Top-down models help us understand and interpret the phenomenology we see in the object being modeled.

Top-down and bottom-up models are complementary.

It's probably better to run in the rain than to walk—it seems you get less wet.

"The intention and the result of a scientific inquiry is to obtain an understanding and control of some part of the universe."

"No substantial part of the universe is so simple that it can be grasped and controlled without abstraction."

"the best material model for a cat is another, or preferably the same cat."

A theoretical model of (a significant part of) the world would have complexity comparable to that of (that part of) the world itself, and we would be unable to understand and use it.

For biorobotic experiments to mean something, it is necessary to identify those parts of a biorobotic model which are robotic and those which model biology.

The design of a biorobotic experiment implicitly includes parts of the hypothesis being tested—those need to be made explicit.