Show Tag: normativity

Since brains evolved out of a need for efficient information processing, all mechanisms in them can be interpreted as emergent phenomena. Taking a normative stance and attributing a cause to them can be enlightening. It is a matter of scientific pragmatism whether one wants to look at a specific phenomenon in terms of why it evolved or what problem it solves, or (often) both.

Ideal observer models of some task are mathematical models describing how an observer might achieve optimal results in that task under the given restrictions, most importantly under the given uncertainty.
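As a minimal sketch (a generic formulation, not tied to any particular study), an ideal observer can be written as a Bayesian decision maker: it combines a likelihood and a prior into a posterior over the world property and picks the estimate that minimises expected loss. The symbols below are chosen here for illustration.

```latex
% s: world property, x: sensory measurement, L: loss function (illustrative)
p(s \mid x) = \frac{p(x \mid s)\, p(s)}{p(x)},
\qquad
\hat{s}(x) = \arg\min_{a} \int L(a, s)\, p(s \mid x)\, \mathrm{d}s
```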

Ideal observer models of cue integration were introduced in vision research but are now used in other uni-sensory tasks (auditory, somatosensory, proprioceptive and vestibular).

When the errors in the estimates of a world property derived from multiple cues are independent between cues and Gaussian, the ideal observer model reduces to a simple weighting strategy.
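Concretely, under these assumptions (independent, Gaussian, unbiased cue estimates; a sketch of the standard result, with notation chosen here for illustration), the optimal estimate is a reliability-weighted average whose variance is never larger than that of the best single cue:

```latex
% \hat{s}_i: estimate from cue i, \sigma_i^2: its error variance
\hat{s} = \sum_i w_i \hat{s}_i,
\qquad
w_i = \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2},
\qquad
\sigma_{\hat{s}}^2 = \frac{1}{\sum_i 1/\sigma_i^2} \le \min_i \sigma_i^2
```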

Critically, the components of `Bayesian Fundamentalist' psychological models are not assumed to correspond to anything in the subject's mind.

Cognitive science must not only provide generative models that predict natural cognitive behavior within a normative framework, but also tie these models to theories of how the necessary computations are realised.

If the main task of cognition is generating the correct actions, then it is not important in itself to recover a perfect representation of the world from perception.

Computational theories of the brain account not only for how it works, but also for why it should work that way.

Marr effectively argues for normativity:

"... gone is any explanation in terms of neurons—except as a way of implementing a method. And present is a clear understanding of what is to be computed, how it is to be done, the physical assumptions on which the method is to be based, and some kind of analysis of algorithms that are capable of carrying it out."

A heuristic program that solves some task is not a theory of that task! Theoretical analysis of the task and its domain is necessary!

When studying an information-processing system, and given a computational theory of it, one can design algorithms and representations that implement the theory and compare their performance to that of natural processing.

If the performance is similar, that supports our computational theory.
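As an illustrative and entirely hypothetical sketch of that workflow in Python: implement a candidate algorithm (here, reliability-weighted cue combination), generate its predictions for a set of trials, and compare them against observed responses. The function names, noise levels, and synthetic data below are assumptions made for the example, not taken from the source.

```python
import numpy as np

def cue_combination(cue_estimates, cue_variances):
    """Reliability-weighted average of per-cue estimates (ideal observer
    under independent Gaussian noise). Purely illustrative."""
    cue_estimates = np.asarray(cue_estimates, dtype=float)
    weights = 1.0 / np.asarray(cue_variances, dtype=float)
    weights /= weights.sum()
    return float(np.dot(weights, cue_estimates))

# Hypothetical experiment: each trial yields a visual and a haptic estimate
# of the same object size, with assumed noise variances.
rng = np.random.default_rng(0)
true_sizes = rng.uniform(4.0, 8.0, size=50)
sigma_visual, sigma_haptic = 0.3, 0.6

visual = true_sizes + rng.normal(0.0, sigma_visual, size=true_sizes.shape)
haptic = true_sizes + rng.normal(0.0, sigma_haptic, size=true_sizes.shape)

# Model predictions for every trial.
predictions = np.array([
    cue_combination([v, h], [sigma_visual**2, sigma_haptic**2])
    for v, h in zip(visual, haptic)
])

# Stand-in for behavioral data; in a real study these would be subjects' responses.
observed_responses = true_sizes + rng.normal(0.0, 0.35, size=true_sizes.shape)

# Compare model output with observed behavior.
rmse = np.sqrt(np.mean((predictions - observed_responses) ** 2))
corr = np.corrcoef(predictions, observed_responses)[0, 1]
print(f"RMSE between model and behavior: {rmse:.3f}")
print(f"Correlation between model and behavior: {corr:.3f}")
```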

Jones and Love accuse `Bayesian Fundamentalism' of focussing too much on the computational theory and neglecting more biologically constrained levels of understanding cognition.

There are often multiple computational theories of a given problem, differing in their assumptions about the hardware and about the problem itself.

Computational theories of cognition alone are underconstrained.

`Bayesian Fundamentalism', like Behaviorism and evolutionary psychology, explains behavior purely from the point of view of the environment; it completely ignores the inner workings of the organism.

In many Bayesian models, the prior and hypothesis space are chosen solely for the convenience of the modeler, not for their plausibility.

Normativity is one thing a computer scientist can contribute to neuroscience: explaining what the brain should do and how what we find in nature implements that.