Integrating information from multiple stimuli can have advantages:
Maximum-likelihood estimation (MLE) has been a successful model in many sensory cue integration tasks.
Multiple cues are used in biological sound-source localization.
Körding and Wolpert had subjects reach for a target without seeing their own hand.
On some trials, subjects briefly received visual feedback of their hand position halfway through the trial; the reliability of this feedback varied between trials. On trials in which the mid-trial feedback was clear, subjects were also given clear feedback of their hand position at the end of the trial.
The visual feedback in the middle of the trial was displaced from the true hand position by an amount drawn from a Gaussian distribution with a mean of 1 cm or, in a second experiment, from a bimodal distribution.
Körding and Wolpert showed that their subjects learned the distribution of the displacement of the visual feedback with respect to the actual position of their hand and used it in the task in a manner consistent with a Bayesian cue integration model.
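The Bayesian account of this result can be sketched numerically: with a learned Gaussian prior over the displacement and a Gaussian visual likelihood, the posterior mean is a precision-weighted average of the two. The standard deviations below are illustrative stand-ins, not the experiment's actual values.

```python
# Sketch of Bayesian estimation of the feedback displacement, assuming
# (as in the Gaussian condition) a learned prior N(mu_p = 1.0 cm, sigma_p)
# and a noisy visual observation x with likelihood N(x, sigma_v).
# sigma_p and the sigma_v values are hypothetical.

def posterior_gaussian(x, sigma_v, mu_p=1.0, sigma_p=0.5):
    """Posterior mean and variance for Gaussian prior times Gaussian likelihood."""
    w_v = 1.0 / sigma_v**2          # precision of the visual feedback
    w_p = 1.0 / sigma_p**2          # precision of the learned prior
    mean = (w_v * x + w_p * mu_p) / (w_v + w_p)
    var = 1.0 / (w_v + w_p)
    return mean, var

# Clear feedback (small sigma_v): the estimate stays close to the observation.
print(posterior_gaussian(x=2.0, sigma_v=0.1))
# Blurred feedback (large sigma_v): the estimate is pulled toward the prior
# mean of 1 cm, which is the reliability-dependent bias subjects showed.
print(posterior_gaussian(x=2.0, sigma_v=2.0))
```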
The study by Hartung et al. shows that (concave) hollow faces are perceived as convex faces, but with less depth than a genuinely convex face, indicating that online information is combined with prior information rather than discarded.
The fact that concave faces are perceived as flattened convex faces can be interpreted as a case of model averaging in cue integration.
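Model averaging can be illustrated with a toy calculation: instead of committing to the single most probable interpretation (model selection), the percept blends the estimates of the competing models by their posterior probabilities. The two depth estimates and the posterior weights below are hypothetical numbers, not values from the Hartung et al. study.

```python
# Illustrative sketch of model averaging for the hollow-face case.
# Two candidate models: a "convex face" model (favored by the strong
# face-convexity prior) and a "concave face" model (favored by the
# online depth cues). All numbers are made up for illustration.

def model_average(estimates, posteriors):
    """Posterior-weighted average of per-model estimates."""
    z = sum(posteriors)
    return sum(e * p for e, p in zip(estimates, posteriors)) / z

# Depth in cm: +5 under the convex interpretation, -5 under the concave one.
# With most posterior mass on the convex model, the averaged percept is
# convex but flattened -- its depth lies between the two models' estimates.
depth = model_average(estimates=[5.0, -5.0], posteriors=[0.8, 0.2])
print(depth)
```

Model selection would instead return the full +5 cm of the winning convex model; the flattened percept is what distinguishes averaging from selection here.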
Ideal observer models of cue integration were introduced in vision research but are now also used in other uni-sensory tasks (auditory, somatosensory, proprioceptive, and vestibular).
When the errors of the estimates of a world property derived from the different cues are independent between cues and Gaussian, the ideal observer model reduces to a simple linear weighting strategy in which each cue is weighted by its reliability.
MLE has been a successful model in many, but not all, of the cue integration tasks studied.
One model that might go beyond MLE in modeling cue combination is `causal inference'.
If it is not certain that an auditory and a visual stimulus belong together, then integrating (binding) them unconditionally is not a good idea. In that case, causal inference and model selection are better strategies.
The a-priori belief that there is one stimulus (the `unity assumption') can then be seen as a prior for one model: the one that assumes a single, cross-modal stimulus.
Sato et al. modeled multisensory integration with adaptation in a purely computational model. In it, the two localizations (one from each modality) were either bound or not bound, and localization followed a maximum a posteriori decision rule.
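The causal-inference-with-MAP-readout idea can be sketched as follows. This is a loose illustration, not Sato et al.'s actual model: it assumes Gaussian measurement noise, a flat prior over source locations (so only the cross-modal discrepancy matters for the model comparison), and uses a broad constant as a stand-in density for the two-cause case; all numeric values are hypothetical.

```python
import math

# Sketch of causal inference with a MAP (model selection) readout:
# infer whether auditory and visual measurements x_a, x_v share one
# cause (C=1) or have two independent causes (C=2), then localize
# accordingly. p_common is the prior for the common-cause model
# (the `unity assumption').

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def causal_inference_map(x_a, x_v, sigma_a, sigma_v, p_common=0.5):
    # Under one common source, the discrepancy x_a - x_v is Gaussian
    # with variance sigma_a^2 + sigma_v^2.
    sigma_d = math.sqrt(sigma_a**2 + sigma_v**2)
    like_c1 = gauss(x_a - x_v, 0.0, sigma_d)
    # Under two independent sources the discrepancy is uninformative;
    # a broad constant density stands in for it (illustrative choice).
    like_c2 = gauss(0.0, 0.0, 20.0)
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    if post_c1 > 0.5:
        # Bind: fuse the cues by inverse-variance (MLE) weighting.
        w_a, w_v = 1 / sigma_a**2, 1 / sigma_v**2
        fused = (w_a * x_a + w_v * x_v) / (w_a + w_v)
        return fused, fused
    # Segregate: each modality keeps its own estimate.
    return x_a, x_v

# Small discrepancy -> bind; large discrepancy -> segregate.
print(causal_inference_map(1.0, 1.5, sigma_a=2.0, sigma_v=1.0))
print(causal_inference_map(1.0, 15.0, sigma_a=2.0, sigma_v=1.0))
```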
The unity assumption can be interpreted either as a prior (if understood as an expectation of a forthcoming uni- or cross-sensory stimulus) or as a mediator variable in a Bayesian inference model of multisensory integration.
Non-spatial stimulus properties influence whether and how cross-sensory stimuli are integrated.
Multisensory integration in cortical VLPFC was more commonly observed for face-vocalization combinations than for general audio-visual cues.
Weisswange et al. apply the idea of Bayesian inference to multi-modal integration and action selection. They show that online reinforcement learning can effectively train a neural network to approximate a Q-function predicting the reward in a multi-modal cue integration task.
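The general approach can be illustrated with a toy version of the idea. This is not Weisswange et al.'s network or task: a tabular Q-function replaces their neural network, the two-cue localization task and all parameters are made up, and learning is plain one-step online Q-learning with reward for pointing at the true source.

```python
import random

# Toy sketch: an agent sees two noisy position cues (one per modality)
# and is rewarded for pointing at the true source location. A Q-table
# over (cue_a, cue_v) states is learned online; with enough trials the
# greedy policy approximates the reward-maximizing (Bayesian) choice.

random.seed(0)
N = 5                      # positions 0..N-1; one action per position
Q = {}                     # Q[(x_a, x_v)] -> list of action values

def noisy(pos):
    """Cue = true position plus clamped discrete noise (illustrative)."""
    return min(N - 1, max(0, pos + random.choice([-1, 0, 0, 1])))

def greedy(state):
    return max(range(N), key=lambda a: Q.get(state, [0.0] * N)[a])

alpha, eps = 0.1, 0.1
for _ in range(50000):
    true_pos = random.randrange(N)
    state = (noisy(true_pos), noisy(true_pos))   # auditory and visual cue
    q = Q.setdefault(state, [0.0] * N)
    a = random.randrange(N) if random.random() < eps else greedy(state)
    reward = 1.0 if a == true_pos else 0.0
    q[a] += alpha * (reward - q[a])              # one-step task: no bootstrapping

# Evaluate the learned greedy policy.
trials = 2000
hits = 0
for _ in range(trials):
    true_pos = random.randrange(N)
    state = (noisy(true_pos), noisy(true_pos))
    hits += greedy(state) == true_pos
print("greedy accuracy:", hits / trials)
```

The learned value Q[s][a] estimates the probability of reward for action a in state s, so the greedy policy implicitly integrates the two cues without ever being told the noise model, which is the point of the reinforcement-learning approach.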