Laurenti et al. found in an audio-visual color identification task that redundant, congruent semantic auditory information (the utterance of a color word) can decrease the response latency to a visual stimulus (the color of a circle displayed to the subject). Incongruent semantic visual or auditory information (a written or uttered color word) can increase response latency. However, congruent semantic visual information (a written color word) does not decrease response latency.

Ursino et al. divide models of multisensory integration into three categories:

  1. Bayesian models (optimal integration etc.; see the sketch after this list),
  2. neuron and network models,
  3. models on the semantic level (symbolic models).
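
The first category can be made concrete with the standard reliability-weighted (maximum-likelihood) cue-combination rule. The following Python sketch is only an illustration under assumed Gaussian noise; the example numbers are invented, not taken from Ursino et al.

```python
# Minimal sketch of maximum-likelihood cue combination, the textbook
# instance of the first category. All values are invented for
# illustration and do not come from Ursino et al.

def integrate(mu_a, var_a, mu_v, var_v):
    """Fuse an auditory and a visual estimate of the same quantity.

    Each cue is weighted by its reliability (inverse variance); the
    fused estimate is more reliable than either cue alone.
    """
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    mu = w_a * mu_a + (1.0 - w_a) * mu_v
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)
    return mu, var

# Vision is assumed four times as reliable as audition here, so the
# fused estimate lands close to the visual one.
print(integrate(mu_a=10.0, var_a=4.0, mu_v=12.0, var_v=1.0))  # (11.6, 0.8)
```

The fused variance is smaller than either unimodal variance, which is the sense in which this kind of integration is called optimal.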

Words from some categories do not activate brain regions related to their meaning. The semantics of those words do not seem to be grounded in perception or action. Pulvermüller calls such categories and their neural representations disembodied.

Some abstract, disembodied words seem to activate areas in the brain related to emotional processing. These words may be grounded in emotion.

It seems that the representations of words can be more or less modal, i.e., words may be more or less abstract and thus more or less grounded in sensory, motor, or emotional areas.

The probability that two stimuli in different modalities are perceived as one multisensory stimulus generally increases with increasing semantic congruency.
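
In the spirit of the Bayesian models above, this can be phrased as causal inference over a common cause: semantic congruency is modeled as raising the prior probability that the two cues share one source. The sketch below is purely an assumption for illustration (the Gaussian/uniform disparity models, the prior values, and all numbers are hypothetical, not taken from the cited literature).

```python
import math

def posterior_common(disparity, sigma, prior_common, disparity_range=20.0):
    """Posterior probability that two cues arise from one cause.

    Assumed generative model: under a common cause the audio-visual
    disparity is Gaussian noise around zero (variance 2*sigma^2); under
    separate causes it is spread uniformly over disparity_range.
    """
    var = 2.0 * sigma ** 2
    like_one = math.exp(-disparity ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
    like_two = 1.0 / disparity_range
    num = like_one * prior_common
    return num / (num + like_two * (1.0 - prior_common))

# The same physical disparity with semantic congruence encoded only in
# the prior: the congruent pairing is far more likely to be unified.
print(posterior_common(2.0, sigma=1.5, prior_common=0.8))  # ~0.91 (congruent)
print(posterior_common(2.0, sigma=1.5, prior_common=0.2))  # ~0.38 (incongruent)
```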

Semantic congruence can influence multisensory integration.

Semantic multisensory congruence can

  • shorten reaction times,
  • lower detection thresholds,
  • facilitate visual perceptual learning.

Jack and Thurlow found that the degree to which a puppet resembled an actual speaker (whether it had eyes and a nose, whether it had a lower jaw moving with the speech, etc.) and whether the lips of an actual speaker moved in sync with heard speech influenced the strength of the ventriloquism effect.

Vatakis and Spence found support for the concept of a `unity assumption' in an experiment in which participants judged whether a visual lip stream or an auditory utterance was presented first. Participants found this task easier when the visual and auditory streams did not match in the gender of the speaker or the content of the utterance, suggesting that the unity assumption was weak in these cases and the streams were therefore not integrated.

Details of instructions and quality of stimuli can influence the strength of the spatial ventriloquism effect.

People fixate on different parts of an image depending on the questions they are asked or the task they are trying to accomplish.