Two stimuli in different modalities are perceived as one multisensory stimulus if the positions in space and points in time at which they are presented are not too far apart.⇒
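This rule of thumb can be sketched as a toy predicate; the threshold values, units, and function name below are invented purely for illustration:

```python
def likely_integrated(dx, dt, max_dx=0.3, max_dt=0.1):
    """Toy rule of thumb: stimuli in different modalities tend to be
    perceived as one multisensory stimulus when both their spatial
    disparity dx (metres) and temporal disparity dt (seconds) are small.
    The thresholds are made-up illustrative values, not empirical ones."""
    return abs(dx) <= max_dx and abs(dt) <= max_dt
```

In reality the integration window is graded and probabilistic rather than a hard threshold, which the Bayesian notes further down capture better.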
O'Regan and Noë argue that people do not suffer from the illusion that there is a "stable, high-resolution, full field representation of a visual scene" in the brain; rather, people have the impression of being aware of everything in the scene.
The difference is that even if we were aware of all the details, we would not need a photograph-like representation in the brain.⇒
The temporal correlation hypothesis has been identified as a candidate mechanism for a neural solution to the binding problem.⇒
The unity assumption is influenced by exogenous and endogenous factors:
The probability that two stimuli in different modalities are perceived as one multisensory stimulus generally increases with increasing semantic congruency.⇒
In a sensorimotor synchronization task, Aschersleben and Bertelson found that an auditory distractor biased the temporal perception of a visual target stimulus more strongly than the other way around.⇒
De Kamps and van der Velde argue for combinatorial productivity and systematicity as fundamental concepts for cognitive representations. They introduce a neural blackboard architecture which implements these principles for visual processing and in particular for object-based attention.⇒
De Kamps and van der Velde introduce a neural blackboard architecture for representing sentence structure.⇒
De Kamps and van der Velde use their blackboard architecture for two very different tasks: representing sentence structure and object attention.⇒
Jack and Thurlow found that the degree to which a puppet resembled an actual speaker (whether it had eyes and a nose, whether it had a lower jaw moving with the speech, etc.) and whether the lips of an actual speaker moved in sync with heard speech influenced the strength of the ventriloquism effect.⇒
The ``unity assumption'' is the hypothesized unconscious assumption (or belief) of an observer that stimuli in different modalities represent a single cross-sensory object.⇒
In one of their experiments, Warren et al. had their subjects localize visual or auditory components of visual-auditory stimuli (videos of people speaking and the corresponding sound).
Stimuli were made `compelling' by playing video and audio in sync and `uncompelling' by introducing a temporal offset.
They found that their subjects performed as under a high ``unity assumption'' when they were told they would perceive cross-sensory stimuli and the stimuli were `compelling', and as under a low ``unity assumption'' when they were told there could be separate auditory or visual stimuli and/or the stimuli were made `uncompelling'.⇒
Vatakis and Spence found support for the concept of a `unity assumption' in an experiment in which participants judged whether a visual lip stream or an auditory utterance was presented first: participants found this task easier if the visual and auditory streams did not match in gender or content, suggesting that their unity assumption was weak in these cases, causing them not to integrate the streams.⇒
If it is not given that an auditory and a visual stimulus belong together, then integrating them (binding) unconditionally is not a good idea. In that case, causal inference and model selection are better.
The a-priori belief that there is one stimulus (the `unity assumption') can then be seen as a prior for one model—the one that assumes a single, cross-modal stimulus.⇒
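Seen this way, the unity assumption is just the prior weight on the single-source model in a Bayesian model comparison. A minimal sketch (function and parameter names are mine, not taken from any of the cited papers):

```python
def model_posterior(likelihood_one, likelihood_two, prior_one=0.5):
    """Posterior probability of the one-stimulus (common-source) model.

    prior_one plays the role of the unity assumption: the a-priori belief
    that there is a single cross-modal stimulus.  likelihood_one and
    likelihood_two are the data likelihoods under the one- and
    two-stimulus models, respectively.
    """
    joint_one = prior_one * likelihood_one
    joint_two = (1 - prior_one) * likelihood_two
    return joint_one / (joint_one + joint_two)
```

With equal likelihoods the posterior simply returns the prior, so a stronger unity assumption directly raises the probability of binding.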
With increasing distance between stimuli in different modalities, the likelihood of perceiving them as being in one location decreases.⇒
With increasing distance between stimuli in different modalities, the likelihood of perceiving them as one cross-modal stimulus decreases.
In other words, the unity assumption depends on the distance between stimuli.⇒
In an audio-visual localization task, Wallace et al. found that their subjects' localizations of the auditory stimulus were usually biased towards the visual stimulus whenever the two stimuli were perceived as one, and vice versa.⇒
Details of instructions and quality of stimuli can influence the strength of the spatial ventriloquism effect.⇒
Sato et al. modeled multisensory integration with adaptation purely computationally. In their model, two localizations (one from each modality) were either bound or not bound, and localized according to a maximum a-posteriori (MAP) decision rule.⇒
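A decision rule of this general shape can be sketched with Gaussian likelihoods. This is a generic causal-inference sketch with invented parameter values, not a reconstruction of Sato et al.'s actual model (which also includes adaptation):

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def map_localize(x_v, x_a, sigma_v=1.0, sigma_a=4.0, sigma_p=10.0, p_bind=0.5):
    """Decide whether to bind visual (x_v) and auditory (x_a) measurements
    and localize accordingly, using a MAP-style model-selection rule.
    All parameter values are illustrative."""
    # Likelihood of the data if both signals come from one source
    # s ~ N(0, sigma_p), with s integrated out in closed form.
    var1 = (sigma_v * sigma_a) ** 2 + sigma_p ** 2 * (sigma_v ** 2 + sigma_a ** 2)
    like_bind = math.exp(-0.5 * ((x_v - x_a) ** 2 * sigma_p ** 2
                                 + x_v ** 2 * sigma_a ** 2
                                 + x_a ** 2 * sigma_v ** 2) / var1) \
        / (2 * math.pi * math.sqrt(var1))
    # Likelihood if the signals come from two independent sources.
    like_split = gauss(x_v, 0.0, math.hypot(sigma_v, sigma_p)) \
        * gauss(x_a, 0.0, math.hypot(sigma_a, sigma_p))
    # MAP model selection: bind iff the bound model is a posteriori more probable.
    if p_bind * like_bind > (1 - p_bind) * like_split:
        # Reliability-weighted fusion, shrunk towards the prior mean 0.
        w_v, w_a, w_p = sigma_v ** -2, sigma_a ** -2, sigma_p ** -2
        fused = (w_v * x_v + w_a * x_a) / (w_v + w_a + w_p)
        return fused, fused
    return x_v, x_a
```

Nearby signals get fused into a single location estimate; widely separated ones are localized independently, reproducing the distance dependence noted above.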
The unity assumption can be interpreted either as a prior (if understood as an expectation of a forthcoming uni- or cross-sensory stimulus) or as a mediator variable in a Bayesian inference model of multisensory integration.⇒
The temporal binding model implies that related activity of neurons across populations leads to binding of different aspects of stimuli.⇒
According to the temporal binding theory, top-down control is realized by top-down influences on synchronization and oscillations in activity. Engel et al. call this model of top-down control the `dynamicist' notion.⇒
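The idea that a top-down signal can act by pulling populations into synchrony can be illustrated with two noisy phase oscillators, where a hypothetical `coupling` parameter stands in for the top-down influence. This is a toy illustration of the dynamicist notion, not a biophysical model:

```python
import math
import random

def simulate_sync(coupling, steps=2000, dt=0.01, seed=0):
    """Two noisy, slightly detuned gamma-band phase oscillators;
    `coupling` stands in for a top-down influence that pulls them into
    synchrony.  Returns the phase-locking value |mean(exp(i*(p1 - p2)))|
    over the run, between 0 (no locking) and 1 (perfect locking)."""
    rng = random.Random(seed)
    p1, p2 = 0.0, math.pi                        # start out of phase
    w1, w2 = 2 * math.pi * 40, 2 * math.pi * 41  # 40 Hz and 41 Hz
    cos_sum = sin_sum = 0.0
    for _ in range(steps):
        # Euler step of coupled, noisy Kuramoto-style phase dynamics.
        d1 = w1 + coupling * math.sin(p2 - p1)
        d2 = w2 + coupling * math.sin(p1 - p2)
        p1 += dt * d1 + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        p2 += dt * d2 + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        cos_sum += math.cos(p1 - p2)
        sin_sum += math.sin(p1 - p2)
    return math.hypot(cos_sum, sin_sum) / steps
```

With the coupling switched on, the phase-locking value rises sharply; a rise in synchrony of this kind is the signature the temporal binding literature associates with binding and top-down selection.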
The temporal binding theory does not rely on a hierarchical architecture.⇒
There is an illusion that there is a "stable, high-resolution, full field representation of a visual scene" in the brain.⇒
Could the illusion that there is a "stable, high-resolution, full field representation of a visual scene" in the brain be the result of the availability heuristic? Whenever we are interested in some point in a visual scene, it is either at the center of our vision anyway, or we saccade to it. In both cases, detailed information of that scene is available almost instantly.
This seems to be what O'Regan and Noë imply (although they do not talk about the availability heuristic).⇒
Jerome Feldman argues that the Neural Binding Problem is really four related problems, and that failing to distinguish between them contributes to the difficulty of understanding them.⇒
Jerome Feldman distinguishes between the following four "technical issues" that together form the binding problem: "General Considerations of Coordination", "The Subjective Unity of Perception", "Visual Feature-Binding", and "Variable Binding".⇒
The general Binding Problem according to Jerome Feldman is really a problem of any distributed information processing system: it is difficult and sometimes impossible or intractable for a system that keeps and processes information in a distributed fashion to combine all the information available and act on it.⇒
Jerome Feldman describes the sub-problem of "General Considerations of Coordination" of the general Binding Problem as more or less a problem of synchronization, and states that modeling efforts are well underway, taking into account such physiological details as spiking behavior and neural oscillations.⇒
The sub-problem of "Subjective Unity of Perception" according to Feldman is the problem of explaining why we experience perception as an "integrated whole" while it is processed by "largely distinct neural circuits".⇒
Feldman relates his "Subjective Unity of Perception" to the stable world illusion.⇒
Feldman gives a functional explanation of the stable world illusion, but he does not seem to explain "Subjective Unity of Perception".⇒
Feldman states that enough is known about what he calls "Visual Feature Binding" that it no longer needs to be called a problem.⇒
Feldman explains Visual Feature Binding by the fact that all the features detected in the fovea usually belong together (because it is so small), and through attention. He cites Chikkerur et al.'s Bayesian model of the role of spatial and object attention in visual feature binding.⇒
Feldman states that "Neural realization of variable binding is completely unsolved".⇒
Feldman dismisses de Kamps' and van der Velde's approaches to neural variable binding stating that they don't work for the general case "where new entities and relations can be dynamically added".⇒
According to Feldman, Hummel divides binding architectures into multiplicative and additive ones.⇒
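The distinction can be illustrated with toy one-hot role and filler vectors (my own hypothetical example, not Hummel's): multiplicative binding forms conjunctive role-filler units (a tensor product), while additive binding merely co-activates role and filler patterns, which becomes ambiguous under superposition:

```python
def outer(role, filler):
    """Multiplicative (conjunctive) binding: the tensor product of a role
    and a filler vector.  Each resulting unit codes one role-filler pair."""
    return [[r * f for f in filler] for r in role]

def add_matrices(a, b):
    """Superimpose two bound representations unit by unit."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def additive(role, filler):
    """Additive binding: role and filler patterns are simply co-activated
    (here: concatenated), with no conjunctive units."""
    return role + filler

def superpose(p, q):
    """Superimpose two flat activity patterns."""
    return [x + y for x, y in zip(p, q)]

# Hypothetical one-hot role and filler vectors.
agent, patient = [1, 0], [0, 1]
john, mary = [1, 0, 0], [0, 1, 0]

# Multiplicative: superposing "John as agent" and "Mary as patient"
# still lets each role row read out the correct filler.
mult = add_matrices(outer(agent, john), outer(patient, mary))

# Additive: the two opposite role assignments produce identical totals,
# so the pairing information is lost under superposition.
a1 = superpose(additive(agent, john), additive(patient, mary))
a2 = superpose(additive(agent, mary), additive(patient, john))
```

The multiplicative scheme pays for its unambiguity with a combinatorial number of conjunctive units, which is one motivation for alternatives such as binding by synchrony.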