
One model that might go beyond MLE in modeling cue combination is 'causal inference'.

Weisswange et al. distinguish between two strategies for Bayesian multisensory integration: model averaging and model selection.

The model averaging strategy computes the posterior probability for the position of the signal source, taking into account the possibility that the stimuli had the same source and the possibility that they had two distinct sources.

The model selection strategy instead commits to the more likely of these two possibilities and bases its estimate on that model alone. This has been called causal inference.
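To make the two strategies concrete, here is a minimal numerical sketch of such a causal-inference model (in the spirit of Körding et al.'s formulation). The noise widths, the prior over positions, the prior probability of a common source, and the grid integration are illustrative choices of mine, not values from any of the papers above. The sketch computes the posterior probability of a common source and the resulting auditory position estimate under model averaging and under model selection.

```python
import numpy as np

# Illustrative causal-inference model for a single audio-visual trial.
# All parameter values are made up for illustration; positions in degrees.
sigma_v, sigma_a = 2.0, 8.0    # visual / auditory measurement noise
sigma_p = 30.0                 # width of the prior over source positions
p_common = 0.5                 # prior probability of a single common source

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def causal_inference(x_v, x_a):
    """Posterior over 'one source vs. two', plus auditory position estimates."""
    s = np.linspace(-90.0, 90.0, 2001)     # grid over candidate source positions
    ds = s[1] - s[0]
    prior = gauss(s, 0.0, sigma_p)

    # Evidence for one common source: both measurements explained by the same s.
    lik_c1 = np.sum(gauss(x_v, s, sigma_v) * gauss(x_a, s, sigma_a) * prior) * ds
    # Evidence for two independent sources: each measurement explained separately.
    lik_c2 = (np.sum(gauss(x_v, s, sigma_v) * prior) * ds *
              np.sum(gauss(x_a, s, sigma_a) * prior) * ds)

    post_c1 = lik_c1 * p_common / (lik_c1 * p_common + lik_c2 * (1.0 - p_common))

    # Posterior-mean auditory estimates under each causal structure.
    post_s_c1 = gauss(x_v, s, sigma_v) * gauss(x_a, s, sigma_a) * prior
    s_hat_c1 = np.sum(s * post_s_c1) / np.sum(post_s_c1)   # fused estimate
    post_s_c2 = gauss(x_a, s, sigma_a) * prior
    s_hat_c2 = np.sum(s * post_s_c2) / np.sum(post_s_c2)   # auditory-only estimate

    averaging = post_c1 * s_hat_c1 + (1.0 - post_c1) * s_hat_c2
    selection = s_hat_c1 if post_c1 > 0.5 else s_hat_c2
    return post_c1, averaging, selection

# Small discrepancy: a common source is likely, and the auditory estimate is
# pulled toward the visual measurement under both strategies.
print(causal_inference(x_v=5.0, x_a=10.0))
# Large discrepancy: two sources are likely, and the auditory estimate stays
# close to the auditory measurement.
print(causal_inference(x_v=5.0, x_a=40.0))
```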

Weisswange et al. model learning of multisensory integration using reward-mediated / reward-dependent learning in an ANN, a form of reinforcement learning.

They model a situation similar to the experiments of Neil et al. and Körding et al., in which a learner is presented with visual, auditory, or audio-visual stimuli.

In each trial, the learner is given reward depending on the accuracy of its response.
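As a toy illustration of how accuracy-based reward alone can shape cue combination, here is a generic reward-modulated learning sketch (a REINFORCE-style perturbation rule with a running reward baseline). It is not Weisswange et al.'s network; the task, architecture, and all parameters are simplified assumptions. A linear readout of a noisy visual and a noisy auditory cue is trained only from reward, and its cue weights should tend toward reliability weighting.

```python
import numpy as np

# Generic sketch of reward-mediated learning of cue weighting (REINFORCE-style
# perturbation learning with a running reward baseline). Not Weisswange et
# al.'s architecture; all parameters are illustrative.
rng = np.random.default_rng(1)

sigma_v, sigma_a = 1.0, 3.0      # the visual cue is more reliable than the auditory one
sigma_explore = 1.0              # exploration noise added to the response
lr = 1e-3                        # learning rate
baseline = 0.0                   # running average of the reward
w = np.array([0.3, 0.3])         # initial cue weights (visual, auditory)

for _ in range(50_000):
    s = rng.uniform(-10.0, 10.0)                    # true source position
    x = s + rng.normal(0.0, [sigma_v, sigma_a])     # noisy visual/auditory cues
    explore = rng.normal(0.0, sigma_explore)        # response perturbation
    response = w @ x + explore
    reward = -abs(response - s)                     # accuracy-based reward

    # Reinforce perturbations that earned more reward than the running baseline.
    w += lr * (reward - baseline) * explore * x
    baseline += 0.01 * (reward - baseline)

# The visual cue should end up with much more weight than the auditory cue,
# approaching reliability weighting (roughly [0.9, 0.1] for these noise levels).
print(w)
```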

In an experiment where stimuli could be caused by the same or by different sources, Weisswange et al. found that their model behaves similarly to both model averaging and model selection, though slightly more like the former.

Wozny et al. distinguish between three strategies for multisensory integration: model averaging, model selection, and probability matching.
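In terms of the sketch above, probability matching does not commit deterministically to the more probable causal structure; it samples one of the two structures with probability equal to its posterior. A hypothetical continuation of the sketch, reusing its variable names:

```python
import random

# Probability matching: report the common-source (fused) estimate with
# probability equal to the posterior probability of a common source,
# otherwise report the segregated (auditory-only) estimate.
def matching(post_c1, s_hat_c1, s_hat_c2):
    return s_hat_c1 if random.random() < post_c1 else s_hat_c2
```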

Weisswange et al.'s results seem at odds with those of Wozny et al. However, Wozny et al. state that different strategies may be used in different settings.

If it is not given that an auditory and a visual stimulus belong together, then integrating (binding) them unconditionally is not a good idea. In that case, causal inference and model selection are the better strategies.

The a priori belief that there is one stimulus (the 'unity assumption') can then be seen as a prior on one of the models, namely the one that assumes a single, cross-modal stimulus.

Sato et al. modeled multisensory integration with adaptation in a purely computational way. In their model, two localizations (one from each modality) were either bound or not bound, and the stimulus was localized according to a maximum a posteriori decision rule.

The unity assumption can be interpreted either as a prior (an expectation of a forthcoming uni- or cross-sensory stimulus) or as a mediating variable in a Bayesian inference model of multisensory integration.

My explanation for the different responsiveness of SC neurons to the individual modalities: they perform causal inference/model selection, and different neurons coding for the same point in space specialize in different stimulus (strength) combinations.

This is basically what Anastasio and Patton's model does (although it does not seem to make sense to me that they use the SOM's spatial organization to represent different sensory combinations).