Reference: "Bayesian Cue Integration as a Developmental Outcome of Reward Mediated Learning"

Thomas H. Weisswange, Constantin A. Rothkopf, Tobias Rodemann, Jochen Triesch. "Bayesian Cue Integration as a Developmental Outcome of Reward Mediated Learning." PLoS ONE 6(7): e21575 (5 July 2011). doi:10.1371/journal.pone.0021575
@article{weisswange2011bayesian,
    abstract = {Average human behavior in cue combination tasks is well predicted by Bayesian inference models. As this capability is acquired over developmental timescales, the question arises, how it is learned. Here we investigated whether reward dependent learning, that is well established at the computational, behavioral, and neuronal levels, could contribute to this development. It is shown that a model free reinforcement learning algorithm can indeed learn to do cue integration, i.e. weight uncertain cues according to their respective reliabilities and even do so if reliabilities are changing. We also consider the case of causal inference where multimodal signals can originate from one or multiple separate objects and should not always be integrated. In this case, the learner is shown to develop a behavior that is closest to Bayesian model averaging. We conclude that reward mediated learning could be a driving force for the development of cue integration and causal inference.},
    author = {Weisswange, Thomas H. and Rothkopf, Constantin A. and Rodemann, Tobias and Triesch, Jochen},
    day = {5},
    doi = {10.1371/journal.pone.0021575},
    journal = {PLoS ONE},
    month = jul,
    number = {7},
    pages = {e21575+},
    publisher = {Public Library of Science},
    title = {Bayesian Cue Integration as a Developmental Outcome of Reward Mediated Learning},
    volume = {6},
    year = {2011}
}


The theoretical accounts of multi-sensory integration due to Beck et al. and Ma et al. do not involve learning and leave little room for it.

Thus, they fail to explain an important aspect of multi-sensory integration in humans.

Weisswange et al.'s model does not reproduce population coding.

Reward-mediated learning has been demonstrated in the adaptation of orienting behavior.

Possible neurological correlates of reward-mediated learning have been found.

Reward-mediated learning is said to be biologically plausible.

Weisswange et al.'s model uses a softmax function to normalize the output.
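A softmax normalization can be sketched as follows; this is a generic illustration, not the paper's exact parameterization, and the inverse-temperature parameter `beta` is an assumption:

```python
import numpy as np

def softmax(q, beta=1.0):
    """Turn a vector of output activations into a probability distribution.

    beta (assumed) acts as an inverse temperature: larger beta makes the
    resulting distribution more peaked on the largest activation.
    """
    z = beta * (q - np.max(q))  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

p = softmax(np.array([1.0, 2.0, 3.0]))
```

Because the outputs are normalized to sum to one, they can be read as action-selection probabilities.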

Weisswange et al. distinguish between two strategies for Bayesian multisensory integration: model averaging and model selection.

The model averaging strategy computes the posterior probability for the position of the signal source, taking into account the possibility that the stimuli had the same source and the possibility that they had two distinct sources.

The model selection strategy computes the most likely of these two possibilities. This has been called causal inference.
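The two strategies can be illustrated with a toy causal-inference computation for a single audio-visual trial. All parameter values here (cue noise levels, prior probability of a common cause, a uniform position prior of assumed width `L`) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def normpdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def causal_inference(xv, xa, sv=1.0, sa=2.0, p_common=0.5, L=20.0):
    """Toy Bayesian causal inference for one audio-visual trial.

    xv, xa: noisy visual and auditory position cues
    sv, sa: their (assumed known) noise standard deviations
    p_common: prior probability that both cues share one source
    L: width of a uniform prior over source positions (assumed)
    Returns the position estimate under model averaging, under model
    selection, and the posterior probability of a common cause.
    """
    # Likelihood of the cue pair under each causal structure
    # (uniform position prior; edge effects ignored)
    like_common = normpdf(xv - xa, 0.0, np.sqrt(sv**2 + sa**2)) / L
    like_separate = 1.0 / L**2
    post_common = (p_common * like_common /
                   (p_common * like_common + (1 - p_common) * like_separate))

    # Reliability-weighted fusion if there is a single source
    fused = (xv / sv**2 + xa / sa**2) / (1 / sv**2 + 1 / sa**2)
    # Visual-only estimate if the sources are separate
    separate = xv

    averaging = post_common * fused + (1 - post_common) * separate
    selection = fused if post_common > 0.5 else separate
    return averaging, selection, post_common
```

Model averaging blends the fused and segregated estimates by the posterior over causal structures; model selection commits to whichever structure is more probable.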

Weisswange et al. model learning of multisensory integration using reward-mediated / reward-dependent learning in an ANN, a form of reinforcement learning.

They model a situation similar to the experiments due to Neil et al. and Körding et al., in which a learner is presented with visual, auditory, or audio-visual stimuli.

In each trial, the learner is given reward depending on the accuracy of its response.

In an experiment where stimuli could be caused by the same source or by different sources, Weisswange et al. found that their model behaves similarly to both model averaging and model selection, though slightly more like the former.

Weisswange et al. apply the idea of Bayesian inference to multi-modal integration and action selection. They show that online reinforcement learning can effectively train a neural network to approximate a Q-function predicting the reward in a multi-modal cue integration task.
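The general idea can be sketched with a heavily simplified stand-in: a tabular Q-function (in place of the paper's ANN) trained by one-step model-free updates with softmax action selection, on a discrete audio-visual localization task. All task details here (grid size, cue reliabilities, learning rate, temperature) are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5                      # discrete source positions 0..N-1
Q = np.zeros((N, N, N))    # Q[xv, xa, action]; tabular stand-in for the ANN
alpha, beta = 0.1, 3.0     # learning rate and softmax inverse temperature

def noisy(pos, p_correct):
    # cue equals the true position with prob p_correct, else uniform noise
    return pos if rng.random() < p_correct else int(rng.integers(N))

for trial in range(20000):
    s = int(rng.integers(N))            # true source position
    xv = noisy(s, 0.9)                  # reliable visual cue
    xa = noisy(s, 0.6)                  # less reliable auditory cue
    q = Q[xv, xa]
    p = np.exp(beta * (q - q.max()))    # softmax action selection
    p /= p.sum()
    a = rng.choice(N, p=p)              # point at a position
    r = 1.0 if a == s else 0.0          # reward for an accurate response
    Q[xv, xa, a] += alpha * (r - Q[xv, xa, a])  # one-step Q update
```

After training, the learned Q-values should point at the cued position when the cues agree and should mostly follow the more reliable visual cue when they conflict, i.e. the reward signal alone induces reliability-weighted cue use.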