Jack and Thurlow found that the strength of the ventriloquism effect depended on the degree to which a puppet resembled an actual speaker (whether it had eyes and a nose, whether its lower jaw moved with the speech, etc.) and on whether the lips of an actual speaker moved in sync with the heard speech.

Laurenti et al. found in an audio-visual color identification task that redundant, congruent, semantic auditory information (the utterance of a color word) can decrease latency in response to a stimulus (the color of a circle displayed to the subject). Incongruent semantic visual or auditory information (a written or uttered color word) can increase response latency. However, congruent semantic visual information (a written color word) does not decrease response latency.
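The reported pattern of results can be summarized in a small sketch. The condition labels and the helper name are illustrative choices of mine, not from the study; the mapping only encodes the qualitative findings above, not real data.

```python
def predicted_latency_effect(cue_modality, congruent):
    """Predicted change in response latency relative to the visual-only
    baseline (circle color alone), per the reported findings."""
    if congruent:
        # A congruent spoken color word speeds responses;
        # a congruent written color word does not.
        return "decrease" if cue_modality == "auditory" else "no change"
    # Incongruent color words, whether spoken or written, slow responses.
    return "increase"

print(predicted_latency_effect("auditory", congruent=True))   # decrease
print(predicted_latency_effect("visual", congruent=True))     # no change
print(predicted_latency_effect("visual", congruent=False))    # increase
```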

Semantic multisensory congruence can

• shorten reaction times,
• lower detection thresholds,
• facilitate visual perceptual learning.

When subjects are asked to ignore stimuli in the visual modality and attend to the auditory modality, increased activity in the auditory temporal cortex and decreased activity in the visual occipital cortex can be observed (and vice versa).

Task-irrelevant visual cues do not affect visual orienting (visual spatial attention). Task-irrelevant auditory cues, however, seem to do so.

Semantic congruence can influence multisensory integration.

In one of their experiments, Warren et al. had their subjects localize visual or auditory components of visual-auditory stimuli (videos of people speaking and the corresponding sound). Stimuli were made 'compelling' by playing video and audio in sync and 'uncompelling' by introducing a temporal offset.

They found that their subjects performed as if under a 'unity assumption' when they were told they would perceive cross-sensory stimuli and the stimuli were 'compelling', and as if under a low 'unity assumption' when they were told there could be separate auditory or visual stimuli and/or the stimuli were made 'uncompelling'.
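The condition structure above can be sketched as a simple rule. The boolean encoding and function name are my own illustrative assumptions; only the high/low pattern comes from the described findings.

```python
def unity_assumption(told_cross_sensory, compelling):
    """Subjects behaved as if under a strong 'unity assumption' only when
    they were told to expect cross-sensory stimuli AND the audio and video
    were played in sync ('compelling'); otherwise the assumption was low."""
    return "high" if (told_cross_sensory and compelling) else "low"

print(unity_assumption(told_cross_sensory=True, compelling=True))   # high
print(unity_assumption(told_cross_sensory=True, compelling=False))  # low
```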

Cognitive factors can influence multisensory processing.