Ghahramani et al. model multisensory integration as a process minimizing uncertainty.

Human performance in combining texture and disparity cues for slant estimation can be explained by (optimal) maximum-likelihood estimation.

According to Landy et al., humans often combine cues (within or across sensory modalities) optimally, consistent with MLE.
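For independent Gaussian cue noise, MLE-optimal combination reduces to an inverse-variance (reliability) weighted average. A minimal sketch with invented numbers:

```python
import numpy as np

def mle_combine(estimates, variances):
    """Fuse independent Gaussian cue estimates by inverse-variance weighting."""
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()
    combined = float(np.dot(weights, estimates))
    combined_var = 1.0 / reliabilities.sum()  # fused variance is below every single-cue variance
    return combined, combined_var

# Hypothetical numbers: cue 2 is four times more reliable than cue 1.
est, var = mle_combine([30.0, 40.0], [4.0, 1.0])
# est = 38.0 (pulled toward the reliable cue), var = 0.8
```

This reduction in variance relative to either single cue is the signature of optimal integration tested in the psychophysics studies.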

Optimal estimation requires taking into account the prior probability (the rate of occurrence) of each stimulus value. The efficient coding approach neglects this, as its proponents acknowledge.

Statistical decision theory and Bayesian estimation are used in the cognitive sciences to describe performance in natural perception.

A best estimator wrt. some loss function is an estimator that minimizes the expected (average) value of that loss function.

Given probability density functions (PDFs) $P(X)$ and $P(M\mid X)$ for a latent variable $X$ and an observable $M$, an optimal estimator for $X$ wrt. the loss function $L$ is given by $$ f_{opt} = \mathrm{arg\,min}_f \int P(x) \int P(m\mid x)\, L(x,f(m))\;dm\;dx $$
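For a fixed measurement $m$, minimizing the inner expectation amounts to searching over candidate point estimates; under squared loss the minimizer is the posterior mean. A numerical sketch on a discrete grid, with a made-up posterior:

```python
import numpy as np

# Discretized (unnormalized, invented) posterior P(x | m) centered at 1.2.
x = np.linspace(-5.0, 5.0, 1001)
posterior = np.exp(-0.5 * (x - 1.2) ** 2)
posterior /= posterior.sum()

# Expected squared loss for each candidate estimate c, minimized by brute force.
candidates = x
expected_loss = [(posterior * (x - c) ** 2).sum() for c in candidates]
best = candidates[int(np.argmin(expected_loss))]

posterior_mean = (posterior * x).sum()
# best and posterior_mean agree (both come out near 1.2)
```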

The maximum a posteriori (MAP) estimator arises from a loss function which penalizes all nonzero errors equally (the 0-1 loss).
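With a skewed posterior, the MAP estimate (mode, optimal under 0-1 loss) and the posterior mean (optimal under squared loss) come apart, which illustrates that "optimal" is always relative to a loss function. A sketch with an invented discrete posterior:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
posterior = np.array([0.05, 0.50, 0.25, 0.15, 0.05])  # skewed, sums to 1

map_estimate = x[np.argmax(posterior)]   # mode: optimal under 0-1 loss -> 1.0
mean_estimate = (x * posterior).sum()    # mean: optimal under squared loss -> 1.65
```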

A weakness of empirical Bayes is that the prior which explains the data best is "not necessarily the one that leads to the best estimator".

Optimality of an estimator is relative to

- loss function,
- measurement probability,
- prior,
- (depending on the setting) a family of functions.

A representation of probabilities is not necessary for optimal estimation.
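One way to see this: a plain least-squares linear readout, trained only on samples, converges to near the reliability-weighted combination weights without ever representing a probability distribution. A sketch with invented noise levels:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 20000
truth = rng.normal(0.0, 5.0, n)           # broad (assumed) stimulus distribution
cue1 = truth + rng.normal(0.0, 2.0, n)    # noisy cue, sd = 2
cue2 = truth + rng.normal(0.0, 1.0, n)    # more reliable cue, sd = 1
X = np.stack([cue1, cue2], axis=1)

# Least-squares fit of a fixed linear readout: no PDFs anywhere.
w, *_ = np.linalg.lstsq(X, truth, rcond=None)
# w[1] > w[0]: the readout weights the reliable cue more, approaching the
# MLE weights (0.2, 0.8) as the stimulus distribution widens
```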

Weisswange et al. apply the idea of Bayesian inference to multi-modal integration and action selection. They show that online reinforcement learning can effectively train a neural network to approximate a Q-function predicting the reward in a multi-modal cue integration task.
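A toy sketch of the idea (not Weisswange et al.'s actual task or network): tabular Q-learning on an assumed discrete task with two cues of different reliability. Reward feedback alone teaches the agent to follow the reliable cue when the cues disagree, with no explicit probabilistic model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed task: two discrete cues each point at one of 5 locations; cue 0
# is correct 90% of the time, cue 1 only 60% (otherwise uniform). Reward 1
# for acting on the true location.
N = 5
Q = np.zeros((N, N, N))   # Q[cue0, cue1, action]
alpha, eps = 0.1, 0.1

def noisy(true, p):
    return true if rng.random() < p else int(rng.integers(N))

for _ in range(50000):
    true = int(rng.integers(N))
    c0, c1 = noisy(true, 0.9), noisy(true, 0.6)
    if rng.random() < eps:                 # epsilon-greedy exploration
        a = int(rng.integers(N))
    else:
        a = int(np.argmax(Q[c0, c1]))
    r = 1.0 if a == true else 0.0
    Q[c0, c1, a] += alpha * (r - Q[c0, c1, a])   # one-step (bandit) Q update

# After learning: on disagreeing cues, the greedy action follows cue 0.
```

The learned Q-values approximate the expected reward of each action given the cue pair, which is what makes the greedy policy behave like a Bayesian decision-maker.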