Show Tag: optimality

Optimal multi-sensory integration is learned (for many tasks).

In many audio-visual localization tasks, humans integrate information optimally.

Landy et al. and Beck et al. seem to imply that optimization to natural stimuli is due to evolution. I'm sure they wouldn't disagree, though, with the idea that optimization is also partly achieved through learning---as in the case of kittens reared in unnatural sensory environments.

If natural learning (and information processing) were perfect, psychology would not need to study learning (and information processing) itself; it would only need to study the environment, which would then determine what we learn and how we process information.

Natural learning (and information processing) is not optimal; therefore psychology needs to study it, and especially its imperfections.

Ghahramani et al. model multisensory integration as a process minimizing uncertainty.

There is a notion that humans perform (near-)optimally in many sensory tasks.

There's a difference between showing that an instance of sensorimotor processing behaves like a Bayesian model and saying it is optimal:

The Bayesian model uses the information it has optimally, but this does not mean that it uses the right kind of information.

Körding and Wolpert showed that their subjects correctly learned the distribution of the displacement of the visual feedback wrt. the actual position of their hand, and used it in the task in a way consistent with a Bayesian cue integration model.
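
A minimal sketch (in Python, with illustrative numbers, not the authors' actual model or parameters) of the kind of computation such a model ascribes to the subjects: the learned prior over the displacement is combined with the noisy visual feedback, weighted by their relative precisions.

```python
# Minimal sketch: Bayesian combination of a learned Gaussian prior over the
# feedback displacement with a noisy visual observation.
# All names and numbers are illustrative assumptions.

def posterior_estimate(obs, prior_mean, prior_var, obs_var):
    """Posterior mean of a Gaussian prior combined with a Gaussian observation.

    The weight on the observation shrinks as its variance (blur) grows,
    pulling the estimate towards the learned prior mean.
    """
    w_obs = prior_var / (prior_var + obs_var)
    return w_obs * obs + (1.0 - w_obs) * prior_mean

# Example: a prior over the displacement learned during training (mean 1.0 cm,
# sd 0.5 cm -- assumed values) and a strongly blurred visual cue at 2.0 cm.
print(posterior_estimate(obs=2.0, prior_mean=1.0, prior_var=0.5**2, obs_var=1.0**2))
# -> 1.2: the estimate is drawn strongly towards the prior mean because the
#    observation is unreliable relative to the prior.
```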

Ideal observer models of some task are mathematical models describing how an observer might achieve optimal results in that task under the given restrictions, most importantly under the given uncertainty.

Ideal observer models of cue integration were introduced in vision research but are now used in other uni-sensory tasks (auditory, somatosensory, proprioceptive and vestibular).

When the errors of multiple estimates of a world property, each based on a different cue, are independent across cues and Gaussian, the ideal observer model reduces to a simple weighting strategy: each single-cue estimate is weighted by its reliability (inverse variance).
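
A minimal sketch of that weighting strategy (Python; cue names and numbers are illustrative assumptions): each single-cue estimate is weighted by its inverse variance, and the combined estimate has lower variance than either cue alone.

```python
# Reliability-weighted averaging, i.e. the MLE cue combination rule for
# independent Gaussian cue noise. Numbers below are illustrative only.

def mle_combine(estimates, variances):
    """Combine single-cue estimates; weights are inverse variances (reliabilities)."""
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined = sum(w * e for w, e in zip(weights, estimates))
    combined_var = 1.0 / total   # never larger than the best single-cue variance
    return combined, combined_var

# A reliable visual estimate and a less reliable haptic estimate of height:
print(mle_combine([55.0, 49.0], [1.0, 4.0]))
# -> (53.8, 0.8): the combined estimate sits closer to the visual one and is
#    more reliable than either cue alone.
```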

Children do not integrate information the same way adults do in some tasks. Specifically, they sometimes do not integrate information optimally, where adults do integrate it optimally.

In an adapted version of Ernst and Banks' visuo-haptic height estimation paradigm, Gori et al. found that children under the age of 8 do not integrate visual and haptic information optimally where adults do.

Bayesian models cannot explain why natural cognition is not always optimal, nor can they predict behavior in cases where it is not.

Natural cognition is not always optimal.

Evolutionary psychology assumes that evolution has led to ecologically optimal behavior, and that behavior can therefore be predicted and understood by considering what is optimal within a given environment.

Without constraints from ecological and biological (mechanistic) knowledge, computational and evolutionary accounts of natural cognition run the risk of finding optimality wherever they look, as there will always be some combination of model and assumptions to match the data.

Bounded rationality, the idea that an organism may be as rational as possible given its limitations, can be useful, but it is prone to producing tautologies: Any organism is as rational as it can be given its limitations if those limitations are taken to be everything that limits its rationality.

Ernst and Banks show that humans combine visual and haptic information optimally in a height estimation task.

There can be situations where my algorithm is still optimal or near-optimal.

Alais and Burr found in an audio-visual localization experiment that the ventriloquism effect can be explained by a simple cue-weighting model of human multi-sensory integration:

Their subjects weighted visual and auditory cues depending on their reliability. The weights they used were consistent with MLE. In most situations, visual cues are much more reliable for localization than are auditory cues. Therefore, a visual cue is given so much greater weight that it captures the auditory cue.
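
A worked example with assumed, illustrative noise levels: if the visual localization noise is $\sigma_V = 1^\circ$ and the auditory localization noise is $\sigma_A = 10^\circ$, the MLE weight on vision is $w_V = \sigma_A^2/(\sigma_A^2 + \sigma_V^2) \approx 0.99$, so the combined location estimate lies almost exactly at the visual location; this is experienced as the sound being captured by the visual stimulus. Degrading the visual stimulus raises $\sigma_V$ and shifts the weight back towards audition.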

Human performance in combining multiple cues to slant (e.g., disparity) for slant estimation can be explained by (optimal) maximum-likelihood estimation.

According to Landy et al., humans often combine cues (intra- or cross-sensory) optimally, consistent with MLE.

Beck et al. argue that sub-optimal computations in biological and artificial neural networks can amplify behavioral and perceptual variability caused by internal and external noise.

Beck et al. argue that sub-optimal computations are a greater cause of behavioral and perceptual variability than internal noise.
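
A toy simulation sketch (Python; the noise levels and the particular sub-optimal read-out are illustrative assumptions, not Beck et al.'s model) of how a fixed, noise-free but sub-optimal computation can amplify the behavioral variability produced by a given amount of internal noise:

```python
# The same internal (cue) noise produces more response variability under a
# sub-optimal read-out than under optimal weighting. Illustrative sketch only.
import random

random.seed(0)
TRUE_VALUE, SIGMA_A, SIGMA_B = 0.0, 1.0, 2.0
N = 10000

def noisy_cues():
    # two internal estimates of the same quantity, corrupted by independent noise
    return (random.gauss(TRUE_VALUE, SIGMA_A), random.gauss(TRUE_VALUE, SIGMA_B))

def optimal(a, b):
    # inverse-variance weighting
    w_a = (1 / SIGMA_A**2) / (1 / SIGMA_A**2 + 1 / SIGMA_B**2)
    return w_a * a + (1 - w_a) * b

def suboptimal(a, b):
    # unweighted averaging, ignoring cue reliability
    return 0.5 * (a + b)

def response_variance(strategy):
    responses = [strategy(*noisy_cues()) for _ in range(N)]
    mean = sum(responses) / N
    return sum((r - mean) ** 2 for r in responses) / N

print(response_variance(optimal), response_variance(suboptimal))
# -> roughly 0.8 vs. 1.25: identical noise, larger behavioral variability
#    under the sub-optimal computation.
```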

Optimal operations are often not feasible for complex tasks for two reasons:

  • the generative models necessary to do optimal estimation are too complex and require a lot of knowledge to create
  • applying these models is much too computationally intensive

Optimal solutions to many computational tasks in perception and action have high computational complexity (in the complexity theory sense).

Optimal operations may have high computational time complexity for the general case, but that does not mean that they cannot be carried out efficiently for ecologically relevant cases.

After all, the brain can trade space for time by increasing parallelism.

Landy et al. specifically state that one conclusion one can draw from observing sub-optimal behavior in a biological system is that the task may be too different from the natural tasks that shaped the system.

We cannot always expect optimal behavior in tasks which have become relevant only recently in human development, e.g. complex reasoning tasks or tasks with highly artificial stimuli.

Tasks with high internal complexity can make it necessary to approximate optimal computations.

Such approximative computations can lead to highly suboptimal behavior even without internal or external noise.

Open-loop control (in biological sensorimotor modeling) is a simplification because most motions are not ballistic.

Cost terms that are routinely minimized in sensorimotor control are

  • metabolic (muscular) energy consumption
  • smoothness cost (squared jerk, i.e. the time-derivative of acceleration)
  • variance of execution (usually assuming that motor noise is control-dependent)

Cost functions that include multiple cost terms must be tuned by weighting the terms, which the modeler often does arbitrarily.
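
A minimal sketch (Python; the discrete approximations and the weights are illustrative assumptions) of such a composite cost over a sampled trajectory, combining the three terms above with modeler-chosen weights:

```python
# Composite movement cost over a discretised trajectory.
# All weights and discrete approximations are illustrative assumptions.

def movement_cost(positions, forces, endpoint_samples, dt,
                  w_energy=1.0, w_jerk=0.1, w_variance=10.0):
    """Weighted sum of the three cost terms listed above.

    positions        : trajectory samples x(t)
    forces           : muscle force/torque samples u(t) (proxy for energy use)
    endpoint_samples : simulated endpoints under control-dependent noise
    The relative weights w_* are what the modeler has to tune, often arbitrarily.
    """
    # metabolic cost: approximated here by summed squared force
    energy = sum(u * u for u in forces) * dt

    # smoothness cost: squared jerk (third time-derivative of position)
    jerk = [(positions[i + 3] - 3 * positions[i + 2] + 3 * positions[i + 1]
             - positions[i]) / dt**3 for i in range(len(positions) - 3)]
    smoothness = sum(j * j for j in jerk) * dt

    # variance cost: endpoint variance across noisy executions
    mean_end = sum(endpoint_samples) / len(endpoint_samples)
    variance = sum((e - mean_end) ** 2 for e in endpoint_samples) / len(endpoint_samples)

    return w_energy * energy + w_jerk * smoothness + w_variance * variance

# Example: a crude five-sample trajectory and two simulated endpoints.
print(movement_cost(positions=[0.0, 0.1, 0.3, 0.6, 1.0],
                    forces=[0.5, 0.8, 0.8, 0.5, 0.2],
                    endpoint_samples=[0.98, 1.03],
                    dt=0.1))
```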

In many instances of multi-sensory perception, humans integrate information optimally.

To estimate optimally, it is necessary to take into account the frequency of occurrence (the prior probability) of each stimulus value. The efficient coding approach neglects this, as its opponents point out.

A best estimator wrt. some loss function is an estimator that minimizes the average value of that loss function.

Given a prior PDF $P(X)$ for a latent variable $X$ and a measurement probability $P(M\mid X)$ for an observable $M$, an optimal estimator for $X$ wrt. the loss function $L$ is given by $$ f_{\mathrm{opt}} = \mathrm{arg\,min}_f \int P(x) \int P(m\mid x)\, L(x,f(m))\;dm\;dx $$
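
A minimal numeric sketch of this definition (Python; the Gaussian prior and likelihood are illustrative assumptions): discretizing $x$ on a grid and minimizing the expected squared-error loss recovers the posterior mean, as it should.

```python
# Grid-based sketch of the optimal-estimator definition above.
# Prior, likelihood and loss are illustrative assumptions.
import math

xs = [i * 0.01 for i in range(-500, 501)]          # grid over the latent x

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def optimal_estimate(m, prior_mu=0.0, prior_sigma=1.0, noise_sigma=0.5):
    # unnormalised posterior P(x|m), proportional to prior times likelihood
    post = [gauss(x, prior_mu, prior_sigma) * gauss(m, x, noise_sigma) for x in xs]
    z = sum(post)
    post = [p / z for p in post]

    # grid search over point estimates: minimise expected squared-error loss
    def expected_loss(a):
        return sum(p * (x - a) ** 2 for x, p in zip(xs, post))
    best = min(xs, key=expected_loss)

    posterior_mean = sum(x * p for x, p in zip(xs, post))
    return best, posterior_mean

print(optimal_estimate(m=1.0))   # both values are approximately 0.8
```

Swapping in an absolute-error loss would instead pick out the posterior median, illustrating that the optimum moves with the choice of loss function, measurement probability and prior.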

Optimality of an estimator is relative to

  • loss function,
  • measurement probability,
  • prior,
  • (depending on the setting) a family of functions.

"Fundamentalist Bayesians" posit that they can predict behavior purely on the basis of optimality.

Nature has had millions of years to optimize the performance of cognitive systems. It is therefore reasonable to assume that they perform optimally wrt. natural tasks and natural conditions.

Bayesian theory provides a framework to determine optimal strategies. Therefore, it makes sense to operate under the assumption that the processes we observe in nature can be understood as implementations of Bayes-optimal strategies.