# Show Tag: multi-sensory


There doesn't seem to be any region in the brain that is truly and only uni-sensory.

V1 is influenced by auditory stimuli (in different ways).

Auditory cortex is influenced by visual stimuli.

There are a number of approaches to audio-visual localization, some implemented on actual robots, others as purely theoretical ANN or algorithmic models.

Bergan et al. show that interaction with the environment can drive multisensory learning. However, Xu et al. show that multisensory learning can also happen if there is no interaction with the multisensory world.

Multisensory integration in cortex has been studied less than in the midbrain, but there is work on that.

According to Ursino et al., there are two theories about the benefit of multisensory convergence at lower levels of cortical processing: One is that convergence helps resolve ambiguity and improves reliability. The other theory is that it helps predict perceptions.

I believe that multisensory convergence, in early cortex and in sub-cortical regions, is useful because responses often depend not on the modality of a stimulus but on its content. The SC, for example, initiates orienting actions towards salient stimuli. It does not matter whether these are salient visual or auditory stimuli: it's always a good idea to orient towards them.

The stereotyped visuomotor flying behavior in the fly is mediated by internal states and input from other sensory modalities.

By combining information from different senses, one can sometimes make inferences that are not possible with information from one modality alone.

Some modalities can yield low-latency, unreliable information and others high-latency, reliable information.

Combining both can produce fast initial estimates that improve over time.

MLE has been a successful model in many sensory cue integration tasks.
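The core of the MLE model is inverse-variance weighting of Gaussian cue estimates. A minimal sketch (the numbers in the example are illustrative):

```python
def mle_fuse(mu_v, var_v, mu_a, var_a):
    """Maximum-likelihood fusion of two Gaussian cue estimates.

    Each cue is weighted by its reliability (inverse variance);
    the fused variance is never larger than either input variance.
    """
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    w_a = 1.0 - w_v
    mu = w_v * mu_v + w_a * mu_a
    var = 1.0 / (1.0 / var_v + 1.0 / var_a)
    return mu, var

# Reliable visual cue (var 1) vs. unreliable auditory cue (var 4):
mu, var = mle_fuse(mu_v=0.0, var_v=1.0, mu_a=10.0, var_a=4.0)
# The fused estimate is pulled towards the reliable visual cue
# (mu ≈ 2.0) and is more reliable than either cue alone (var ≈ 0.8).
```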

Irrelevant auditory stimuli can dramatically improve or degrade performance in visual orienting tasks:

In Wilkinson et al.'s experiments, cats' performance in orienting towards near-threshold, medial visual stimuli was much improved by irrelevant auditory stimuli close to the visual stimuli and drastically degraded by irrelevant auditory stimuli far from the visual stimuli.

If visual stimuli were further towards the edge of the visual field, then lateral auditory stimuli improved their detection rate even if they were disparate.

Chemical deactivation of AES reduces both the auditory-induced improvement and the auditory-induced degradation of performance in orienting towards visual stimuli.

There are visuo-somatosensory neurons in the putamen.

Graziano and Gross found visuo-somatosensory neurons in those regions of the putamen which code for arms and the face in somatosensory space.

Visuo-somatosensory neurons in the putamen with somatosensory RFs in the face are very selective: They seem to respond to visual stimuli consistent with an upcoming somatosensory stimulus (nearby objects approaching the somatosensory RFs of the neurons).

Graziano and Gross report on visuo-somatosensory cells in the putamen in which remapping seems to be happening: Those cells responded to visual stimuli only when the animal could see the arm in which the somatosensory RF of those cells was located.

Multisensory neurons in AES are mostly located at the borders of unisensory regions.

AEV is not exclusively (but mostly) visual.

Multisensory input can provide redundant information on the same thing.

Redundancy reduces uncertainty and increases reliability.

The redundancy provided by multisensory input can facilitate or even enable learning.

Xu et al. stress the point that in their cat rearing experiments, multisensory integration arises although there is no reward and no goal-directed behavior connected with the stimuli.

The fact that multi-sensory integration arises without reward connected to stimuli motivates unsupervised learning approaches to SC modeling.

The precise characteristics of multi-sensory integration were shown to be sensitive to the characteristics of cross-modal stimuli experienced in the real world during early life.

It is interesting that multisensory integration arises in cats in experiments in which there is no goal-directed behavior connected with the stimuli, as that somewhat contradicts the paradigm of embodied cognition.

Xu et al. raised two groups of cats in darkness and presented one with congruent and the other with random visual and auditory stimuli. They showed that SC neurons in cats from the congruent-stimulus group developed multi-sensory characteristics while those from the other group mostly did not.

In the experiment by Xu et al., SC neurons in cats that were raised with congruent audio-visual stimuli distinguished between disparate combined stimuli, even if these stimuli were both in the neurons' receptive fields. Xu et al. state that this is different in naturally reared cats.

In the experiment by Xu et al., SC neurons in cats that were raised with congruent audio-visual stimuli had a preferred time difference between the onset of visual and auditory stimuli of 0 s, whereas this is around 50-100 ms in normally reared cats.

In the experiment by Xu et al., SC neurons in cats reacted best to auditory and visual stimuli that resembled those they were raised with (small flashing spots, broadband noise bursts); however, they generalized and reacted similarly to other stimuli.

MLE has been a successful model in many, but not all, cue integration tasks studied.

One model which might go beyond MLE in modeling cue combination is causal inference.

There are two strands in multi-sensory research: mathematical modeling and modeling of neurophysiology.

Yay! I'm bridging that gulf as well!

According to Ma et al.'s work, computations in neurons doing multi-sensory integration should be additive or sub-additive. This is at odds with observed neurophysiology.

My model is normative, performs optimally, and shows super-additivity (to be shown).

Fetsch et al. explain the discrepancy between observed neurophysiology—superadditivity—and the normative solution to single-neuron cue integration proposed by Ma et al. using divisive normalization:

They propose that the network activity is normalized in order to keep neurons' activities within their dynamic range. This would lead to the apparent reliability-dependent weighting of responses found by Morgan et al. and superadditivity as described by Stanford et al.
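A toy sketch of how divisive normalization with an expansive exponent can produce superadditivity for weak inputs while keeping strong responses within the dynamic range. The normalization pool is reduced here to the neuron's own drive, and all parameter values are illustrative, not those of Fetsch et al.:

```python
def normalized_response(d_v, d_a, pool, sigma=1.0, n=2.0):
    """Divisively normalized response of a multisensory neuron.

    d_v, d_a : driving inputs from the visual and auditory modality
    pool     : activity of the normalization pool (simplified here
               to the neuron's own summed drive)
    The expansive exponent n makes responses to weak combined inputs
    superadditive (inverse effectiveness); for strong inputs the
    denominator dominates and responses become sub-additive.
    """
    d = d_v + d_a
    return d ** n / (sigma ** n + pool ** n)

# Weak stimuli: the bimodal response exceeds the sum of the
# unimodal responses (superadditivity).
r_v = normalized_response(0.5, 0.0, pool=0.5)   # 0.2
r_a = normalized_response(0.0, 0.5, pool=0.5)   # 0.2
r_av = normalized_response(0.5, 0.5, pool=1.0)  # 0.5 > 0.2 + 0.2
```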

Multi-sensory neurons in the SC are only in the intermediate and deep layers.

Neurons that receive auditory and visual ascending input also receive (only) auditory and visual descending projections.

Most multisensory SC neurons project to brainstem and spinal cord.

There are monosynaptic excitatory AES-SC projections and McHaffie et al. state that "the predominant effect of AES on SC multisensory neurons is excitatory."

Cognitive factors can influence multisensory processing.

Semantic congruence can influence multisensory integration.

Semantic multisensory congruence can

• shorten reaction times,
• lower detection thresholds,
• facilitate visual perceptual learning.

Kleesiek et al. use a recurrent neural network with parametric bias (RNNPB) to classify objects from the multisensory percepts induced by interacting with them.

If it is not given that an auditory and a visual stimulus belong together, then integrating them (binding) unconditionally is not a good idea. In that case, causal inference and model selection are better.

The a-priori belief that there is one stimulus (the 'unity assumption') can then be seen as a prior for one model: the one that assumes a single, cross-modal stimulus.

Sato et al. modeled multisensory integration with adaptation purely computationally. In their model, two localizations (one from each modality) were either bound or not bound and localized according to a maximum a posteriori decision rule.
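A minimal sketch of such a bind-or-not decision, following the Koerding-style causal-inference formulation: Gaussian noise on each cue, a Gaussian prior over source locations, and a MAP choice between a one-cause and a two-cause model. All parameter values are illustrative, not those of Sato et al.:

```python
import math

def gauss(x, mu, var):
    """Gaussian density with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def posterior_common(x_v, x_a, var_v, var_a,
                     p_common=0.5, mu_p=0.0, var_p=100.0):
    """Posterior probability that both cues share a single cause.

    Binding (MAP decision) then simply means: bind iff this
    posterior exceeds 0.5.
    """
    # Likelihood under one common source s ~ N(mu_p, var_p),
    # marginalized over s (closed form for Gaussians).
    denom = var_v * var_a + var_v * var_p + var_a * var_p
    quad = ((x_v - x_a) ** 2 * var_p
            + (x_v - mu_p) ** 2 * var_a
            + (x_a - mu_p) ** 2 * var_v) / denom
    like_c1 = math.exp(-0.5 * quad) / (2 * math.pi * math.sqrt(denom))
    # Likelihood under two independent sources.
    like_c2 = gauss(x_v, mu_p, var_v + var_p) * gauss(x_a, mu_p, var_a + var_p)
    return p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)

# Nearby cues are likely bound; widely disparate cues are not.
near = posterior_common(0.0, 1.0, var_v=1.0, var_a=4.0)   # > 0.5: bind
far = posterior_common(0.0, 15.0, var_v=1.0, var_a=4.0)   # < 0.5: keep separate
```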

The unity assumption can be interpreted as a prior (if interpreted as an expectation of a forthcoming uni- or cross-sensory stimulus) or a mediator variable in a Bayesian inference model of multisensory integration.

Seeing someone say 'ba' and hearing them say 'ga' can make one perceive them as saying 'da'. This is called the 'McGurk effect'.

In the study by Xu et al., multi-sensory enhancement in specially-raised cats decreased gradually with the distance between the uni-sensory stimuli instead of occurring if and only if both stimuli were present in the neurons' RFs. This differs from normally reared cats, in which enhancement occurs regardless of stimulus distance as long as both uni-sensory components are within the RF.

My explanation for the different responsiveness to the individual modalities in SC neurons: they perform causal inference/model selection; different neurons coding for the same point in space specialize in different stimulus (strength) combinations.

This is basically what Anastasio and Patton's model does (except that it does not seem to make sense to me that they use the SOM's spatial organization to represent different sensory combinations).

There is multisensory integration in areas typically considered unisensory, e.g. primary and secondary auditory cortex.