Graphical representations of numerical data are important for discovering qualitative properties.

For humans doing research, visualizations are not just handy—they are part of the research.

For humans running simulations, visualizations (output representations) are not just handy—they are actually part of the simulations because without those, humans cannot interpret the results.

According to Humphreys, simulation is a set of techniques rather than a single tool. It includes

  • numerical solution of equations,
  • visualization,
  • error correction of the computational methods,
  • data analysis,
  • model explorations.

Simple systems can be hard to predict quantitatively without numerical techniques.
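
To make this concrete (my own illustration, not from the source): even the plain pendulum without the small-angle approximation has no elementary closed-form solution, so quantitative prediction already requires numerical integration. A minimal Python sketch with a fixed-step Runge-Kutta integrator; all names and parameter values are my own choices.

  # Minimal sketch (illustrative, not from the source): the plain pendulum
  # theta'' = -(g/L) * sin(theta) has no elementary closed-form solution, so its
  # trajectory is predicted numerically here with a fixed-step RK4 integrator.
  import math

  def pendulum_rk4(theta0, omega0, g=9.81, L=1.0, dt=1e-3, t_end=10.0):
      """Integrate theta'' = -(g/L) sin(theta) and return the final state."""
      def deriv(theta, omega):
          return omega, -(g / L) * math.sin(theta)

      theta, omega = theta0, omega0
      for _ in range(int(t_end / dt)):
          k1 = deriv(theta, omega)
          k2 = deriv(theta + 0.5 * dt * k1[0], omega + 0.5 * dt * k1[1])
          k3 = deriv(theta + 0.5 * dt * k2[0], omega + 0.5 * dt * k2[1])
          k4 = deriv(theta + 0.5 * dt * k3[0], omega + 0.5 * dt * k3[1])
          theta += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
          omega += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
      return theta, omega

  # large-angle swing: the small-angle formula would be badly off here
  print(pendulum_rk4(theta0=2.5, omega0=0.0))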

We extend ourselves using technology in the sense that we build things that give us epistemological access to parts of reality which would otherwise be beyond our reach.

This includes instruments which help us perceive the world in ways not given to us naturally, like microscopes or compasses, and machines which help us think about our theories more deeply than our cognitive limitations would otherwise permit.

Computational Neuroscience is computational science in neuroscience.

A computational model has six components, according to Humphreys:

  1. A `computational template', together with types of boundary and initial conditions—the `basic computational form';
  2. Construction assumptions;
  3. Correction set;
  4. An interpretation;
  5. Initial Justification of the template;
  6. An output representation.

A simulation can be thought of as a thought experiment: Given a correct mathematical model of something, it tries out how that model behaves and translates (via the output representation and interpretation) the behavior back into the realm of the real world.

I would add that the model need not be correct if the purpose of the simulation is to test the model's correctness. In that case, the thought experiment tests the hypothesis that the model is indeed correct for the object or process it is supposed to model, by generating predictions (solutions to the mathematical model). Those predictions are then compared to existing behavioral data from the object or process being modeled.

A computational model according to Humphreys seems to me to be a computational theory (in the logician's sense) and a manual for applying it to the world.

Adams et al. argue that, since the brain is fast and requires little energy, researching biomimetic solutions can help address the problems of robots' limited energy resources and computing power.

Biomimetic approaches have been ascribed various benefits.

Braitenberg postulates "the law of uphill analysis and downhill invention", which states that it is easier to build something and see what it does (what it can do) than to analyse something just from its observable output.

As Dennett points out in his review of Braitenberg's book, just assuming that the mind and the brain are the same thing (loosely speaking) is all nice and well, but it does not help, because the brain is so complex that knowing all of its structure will not help much in learning about the mind.

Using Braitenberg's "law of uphill analysis and downhill invention" can help, because it starts by designing simple things and seeing what behavior they exhibit.

Devising a mechanism schema can guide scientific progress: presupposing that an activity is necessary can make one look for the entity that can perform that activity.

Machamer et al. call a mechanism schema with explicitly missing parts a mechanism sketch.

According to Wilson and Golonka, there are four questions a truly embodied research programme (theory?) needs to ask:

  1. What is the task to be solved?
  2. Which (cognitive, bodily, environmental) resources does the organism have to solve the task?
  3. How can the available resources be used to solve the task?
  4. Does the organism indeed use the hypothesized resources in the hypothesized way?

Some of the multisensory properties of the SC were known in the early seventies, only to be re-discovered much later (in lethal animal experiments).

In hypothesis testing, we usually know that neither the null hypothesis nor the alternative hypothesis can be fully true. They can at best approximate reality, i.e. they differ from it. However, the procedure of hypothesis testing consists of testing which of the two is more likely to be true given a sample, not which of the two is the better approximation. Thus, strictly speaking, we are usually applying hypothesis testing to problems the theory was not designed for.
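
A small illustration of this point (my own, with made-up numbers): if the null hypothesis is only negligibly wrong, a large enough sample will still reject it, because the test asks which hypothesis fits the sample, not which is the better approximation of reality.

  # Illustrative sketch (my own, made-up numbers): the null hypothesis mu = 0
  # is only negligibly wrong (true mean 0.01), yet with enough data the t-test
  # rejects it, because it asks which hypothesis fits the sample better, not
  # which one is the better approximation of reality.
  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)
  for n in (100, 10_000, 1_000_000):
      sample = rng.normal(loc=0.01, scale=1.0, size=n)
      t, p = stats.ttest_1samp(sample, popmean=0.0)
      print(f"n={n:>9}  t={t:6.2f}  p={p:.3g}")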

Research should be reproducible. This means, for computational biology (and therefore computational neuroscience), that all code and data should be retained and preferably accessible to anyone.

Pure neural modeling does not explain complex behavior.

Much of neural processing can be understood as compression and de-compression.
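
One way to read this claim (my own sketch, not from the source) is dimensionality reduction followed by reconstruction: the toy example below compresses synthetic 50-channel "activity" to three principal components and de-compresses it again; all names and numbers are my own.

  # Toy sketch (my own illustration, not from the source): synthetic 50-channel
  # "activity" driven by 3 latent causes is compressed to 3 principal components
  # and then de-compressed (reconstructed) from that code.
  import numpy as np

  rng = np.random.default_rng(1)
  latent = rng.normal(size=(500, 3))                          # 3 underlying causes
  data = latent @ rng.normal(size=(3, 50)) + 0.05 * rng.normal(size=(500, 50))

  centered = data - data.mean(axis=0)
  _, _, vt = np.linalg.svd(centered, full_matrices=False)
  components = vt[:3]                                         # compression directions
  codes = centered @ components.T                             # "encoding": 50 -> 3 dims
  reconstruction = codes @ components                         # "decoding": 3 -> 50 dims

  rel_error = np.mean((centered - reconstruction) ** 2) / np.mean(centered ** 2)
  print(f"relative reconstruction error with 3 of 50 dimensions: {rel_error:.4f}")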

Biomimetics is the approach of making use of the technological and theoretical insights of the biological sciences for engineering.

Often, the quest to understand a biological system leads to the recognition of new paradigms for engineering.

Often biology has a solution to a problem in the engineering disciplines.

There have been biomimetic solutions to problems in materials sciences, mechanical sciences, sensor technology, and various problems in robotics.

There have been biomimetic solutions to various problems in robotics.

Ideal observer models of some task are mathematical models describing how an observer might achieve optimal results in that task under the given restrictions, most importantly under the given uncertainty.

Ideal observer models of cue integration were introduced in vision research but are now used in other uni-sensory tasks (auditory, somatosensory, proprioceptive and vestibular).

When the errors in multiple estimates of a world property, each based on a different cue, are independent between cues and Gaussian, then the ideal observer model reduces to a simple weighting strategy.
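
Concretely, that weighting strategy is inverse-variance (reliability) weighting: each single-cue estimate is weighted by 1/sigma^2 and the weights are normalized. A minimal sketch with hypothetical visual and auditory location estimates; the function name and all numbers are my own.

  # Minimal sketch (my own notation, not from the source): with independent
  # Gaussian errors, the ideal-observer estimate is the inverse-variance
  # (reliability) weighted average of the single-cue estimates.
  import numpy as np

  def fuse(estimates, sigmas):
      """Combine per-cue estimates given their error standard deviations."""
      estimates = np.asarray(estimates, dtype=float)
      reliabilities = 1.0 / np.asarray(sigmas, dtype=float) ** 2
      weights = reliabilities / reliabilities.sum()
      fused = weights @ estimates
      fused_sigma = np.sqrt(1.0 / reliabilities.sum())  # fused estimate is more reliable
      return fused, fused_sigma

  # e.g. a visual and an auditory location estimate (hypothetical numbers)
  print(fuse(estimates=[10.0, 14.0], sigmas=[1.0, 2.0]))  # -> (10.8, ~0.89)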

There are two strands in multi-sensory research: mathematical modeling and modeling of neurophysiology.

Yay! I'm bridging that gulf as well!

Taking inspiration for technical solutions from nature promises greater robustness.

Biomimetic (neural) robotics can provide feedback to neuroscience.

Less is known about the motor properties of SC neurons than about the sensory properties.

Bayesian models cannot explain why natural cognition is not always optimal, nor predict behavior in cases when it is not.

Purely computational, Bayesian accounts of cognition are underconstrained.

Evolutionary psychology assumes that evolution has led to ecologically optimal behavior and that behavior can therefore be predicted and understood by considering optimal behavior within an environment.

Bayesian theory can be used to describe hypotheses and prior beliefs. These two can then be tested against actual behavior.

In contrast with `Bayesian Fundamentalism', this approach views the prior and the hypotheses as the scientific theory to be tested, rather than as the only (if handcrafted) way to describe the situation, used to see whether optimality can once again be demonstrated.
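
A toy sketch of what testing the prior and hypotheses against behavior could look like (entirely hypothetical setup and numbers, not from the source): two candidate Gaussian priors are treated as competing theories, and each is scored by how well its posterior-mean predictions match observed mean estimates.

  # Toy sketch (hypothetical numbers and setup, not from the source): two
  # candidate Gaussian priors are treated as competing theories; each is
  # scored by how well its posterior-mean predictions match observed behavior.
  import numpy as np

  stimuli = np.array([2.0, 4.0, 6.0, 8.0])        # presented stimulus values
  responses = np.array([2.8, 4.4, 5.9, 7.4])      # hypothetical mean behavioral estimates
  sigma_likelihood = 1.0                          # assumed sensory noise

  def posterior_mean(stim, prior_mean, prior_sigma):
      """Posterior mean for a Gaussian prior and a Gaussian likelihood centred on stim."""
      w = (1 / prior_sigma**2) / (1 / prior_sigma**2 + 1 / sigma_likelihood**2)
      return w * prior_mean + (1 - w) * stim

  for name, (mu, sd) in {"narrow prior at 5": (5.0, 1.0),
                         "broad prior at 5": (5.0, 10.0)}.items():
      mse = np.mean((posterior_mean(stimuli, mu, sd) - responses) ** 2)
      print(f"{name}: mean squared error against behavior = {mse:.3f}")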

Backpropagation was discovered at least four times within one decade.

"The intention and the result of a scientific inquiry is to obtain an understanding and control of some part of the universe."

"No substantial part of the universe is so simple that it can be grasped and controlled without abstraction."

"the best material model for a cat is another, or preferably the same cat."

A theoretical model of (a significant part of) the world would have complexity comparable to (that part of) the world, and we would be unable to understand and use it.

Biorobotics has been used successfully to study and sometimes validate theoretical biological claims.

Testing biological hypotheses using robots is called `biorobotics'.

Using a material model instead of the actual object of study is useful in two cases:

  • the physical model is better understood than the original,
  • it is easier to use the model than the original.

Biorobotics is a case of using a material model to understand biological organisms for both reasons given by Rosenblueth and Wiener:

  • robots are generally better understood than the real thing (because we construct them),
  • they are more easily studied, for technical and for ethical reasons.

For biorobotic experiments to mean something, it is necessary to identify those parts of a biorobotic model which are robotic and those which model biology.

The design of a biorobotic experiment implicitly includes parts of the hypothesis being tested—those need to be made explicit.

Cognitive science must not only provide generative models that predict natural cognitive behavior within a normative framework, but must also tie these models to theories of how the necessary computations are realised.

Tasks with high internal complexity can make it necessary to approximate optimal computations.

Such approximative computations can lead to highly suboptimal behavior even without internal or external noise.
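
As a stand-in illustration of approximation alone producing suboptimality (my own example, not from the source): a deterministic greedy heuristic for a tiny knapsack-style resource problem falls well short of the optimum although no noise is involved anywhere.

  # Stand-in example (my own, not from the source): a deterministic greedy
  # heuristic for a tiny 0/1 knapsack problem, as a proxy for an internally
  # complex task, is clearly suboptimal although no noise is involved anywhere.
  from itertools import combinations

  values, weights, capacity = [60, 100, 120], [10, 20, 30], 50

  # Greedy by value density takes items 0 and 1 (value 160); item 2 no longer fits.
  order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
  greedy_value, remaining = 0, capacity
  for i in order:
      if weights[i] <= remaining:
          greedy_value += values[i]
          remaining -= weights[i]

  # Exhaustive search finds the true optimum: items 1 and 2 (value 220).
  best_value = max(
      sum(values[i] for i in subset)
      for r in range(len(values) + 1)
      for subset in combinations(range(len(values)), r)
      if sum(weights[i] for i in subset) <= capacity
  )

  print(greedy_value, best_value)  # 160 vs 220: suboptimality from the approximation alone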

One reason for specifically studying multi-sensory integration in the (cat) SC is that there is a well-understood connection between input stimuli and overt behavior.

What we find in the SC we can use as a guide when studying other multi-sensory brain regions.

Statistical decision theory and Bayesian estimation are used in the cognitive sciences to describe performance in natural perception.

Redundancy reduction, predictive coding, efficient coding, sparse coding, and energy minimization are related hypotheses with similar predictions. All these theories are reasonably successful in explaining biological phenomena.

"Constructing a mathematically precise account of the brain has the potential to change our view of how it works."

Computational theories of the brain account not only for how it works, but also for why it should work that way.

"In order to understand a device one needs many different kinds of explanations." To understand vision, one needs theories that comply with the knowledge of the common man, the brain scientist, the experimental psychologist and which can be put to practical use.

Marr effectively argues normativity:

"... gone is any explanation in terms of neurons—except as a way of implementing a method. And present is a clear understanding of what is to be computed, how it is to be done, the physical assumptions on which the method is to be based, and some kind of analysis of algorithms that are capable of carrying it out."

It is important to make the distinction between different levels of understanding something (an information processing system) explicit.

Once one understands that an abstract, mathematical description of the brain as an information-processing system is part of understanding the brain as a whole, one can rationally study

  • what is being processed,
  • why it is being processed,
  • how it is processed,
  • and whether or not processing it that way is optimal.

The three levels at which any information-processing system needs to be understood are

  • computational theory
  • representation and algorithm
  • hardware implementation

According to Marr, the computational theory of an information-processing system is the theory of what it does, why it does what it does and "what is the logic of the strategy by which" what it does can be done.

Computational neuroscience is not the study of computational theories of the brain (alone): it also deals with the other two aspects of understanding the brain.

A heuristic program that solves some task is not a theory of that task! Theoretical analysis of the task and its domain is necessary!

Marr speaks of vision as one process, whose task is to generate `a useful description of the world'. However, there is more than one actual goal of vision (though they share similar properties) and thus there are different representations and algorithms being used in the different parts of the brain concerned with these goals.

When studying an information-processing system, and given a computational theory of it, algorithms and representations for implementing it can be designed, and their performance can be compared to that of natural processing.

If the performance is similar, that supports our computational theory.

Schroeder names two general definitions of multisensory integration: One includes any kind of interaction between stimuli from different senses, the other only integration of information about the same object of the real world from different sensory modalities.

Both of these are definitions at the functional level, as opposed to the biological level with which Stein's definition is concerned.

Multisensory integration can be thought of as a special case of integration of information from different sources---be they from one physical modality or from many.

Studying multisensory integration instead of the integration of information from different channels from the same modality tends to be easier because the stimuli can be more reliably separated in experiments.

Schroeder argues that multisensory integration is not separate from general cue integration and that information gleaned about the former can help understand the latter.

Jones and Love accuse `Bayesian Fundamentalism' of focussing too much on the computational theory and neglecting more biologically constrained levels of understanding cognition.

There often are multiple computational theories of a given problem, differing in their assumptions about the hardware and about the problem itself.

Computational theories of cognition alone are underconstrained.

`Bayesian Fundamentalism', like Behaviorism and evolutionary psychology, explains behavior purely from the point of view of the environment---all of them completely ignore the inner workings of the organism.

Connectionism used to use telegraph networks as its founding metaphor. Information processing units and physical neurons came later.

Connectionism has twice suffered phases of theoretically unfounded euphoria.

In many Bayesian models, the prior and hypothesis space are solely chosen for the convenience of the modeler, not for their plausibility.

`Fundamentalist Bayesians' posit that they can predict behavior purely on the basis of optimality.

Jones and Love talk about Bayesian theories as psychological theories---not so much as neuroscientific ones... I guess?