# Show Tag: perception


We extend ourselves using technology in the sense that we build things that give us epistemological access to parts of reality which would otherwise be beyond our reach.

This includes instruments which help us perceive the world in ways not given to us naturally, like microscopes or compasses, and machines which help us think about our theories more deeply than our cognitive limitations would otherwise permit.

Percepts can, in certain settings, be processed and acted upon without our being conscious of them. This raises the question of what consciousness is for.

Some people hold that consciousness is not needed for anything, but is merely a side effect of perceptual processing.

One theory of the function of consciousness is that it is needed to integrate information from different modalities and processing centers in the brain and coordinate their activity.

It makes sense that consciousness could be important for multi-sensory integration.

A traditional model of visual processing for perception and action proposes that the two tasks rely on different visual representations. This model explains the weak effect of visual illusions like the Müller-Lyer illusion on performance in grasping tasks.

Foster et al. challenge the methodology of an earlier study by Dewar and Carey which supports Goodale and Milner's perception and action model of visual processing.

They do this by turning the closed visuomotor loop in Dewar and Carey's study into an open one: visual feedback is removed at motion onset. Under these conditions, the effect of the illusion appears for grasping (which it did not in the closed-loop condition) but not, or not as strongly, for manual object-size estimation.

Foster et al. argue that this suggests that the effect found in Dewar and Carey's study is due to continuous visual feedback.

It is hard to explain higher-level cognition solely in terms of correspondence to perception or action.

The traditional view of cognitive representation needs to be extended rather than replaced by aspects and mechanisms of correspondence to perception and action.

By combining information from different senses, one can sometimes make inferences that are not possible with information from one modality alone.

Some modalities can yield low-latency, unreliable information and others high-latency, reliable information.

Combining both can produce fast information which improves over time.
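As a minimal sketch of this idea, the following combines a fast-but-noisy cue with a slow-but-reliable one by precision (inverse-variance) weighting. The scenario, noise levels, and variable names are illustrative assumptions, not taken from any of the studies mentioned here.

```python
import numpy as np

rng = np.random.default_rng(0)
true_size = 5.0

# Assumed setup: a low-latency, unreliable cue arrives first,
# a high-latency, reliable cue arrives later.
fast_sigma, slow_sigma = 2.0, 0.5
fast_cue = true_size + rng.normal(0, fast_sigma)

# Early on, only the fast cue is available.
early_estimate = fast_cue

# Once the slow cue arrives, fuse both by inverse-variance weighting.
slow_cue = true_size + rng.normal(0, slow_sigma)
w_fast = 1 / fast_sigma**2
w_slow = 1 / slow_sigma**2
fused_estimate = (w_fast * fast_cue + w_slow * slow_cue) / (w_fast + w_slow)

# The fused estimate is more precise than either cue alone:
fused_sigma = (w_fast + w_slow) ** -0.5
```

The fused standard deviation is always smaller than that of the best single cue, which is one way to make precise the claim that the combined estimate "improves over time".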

Very few perceptions are truly affected only by sensation through one sensory modality.

Multisensory input can provide redundant information on the same thing.

Redundancy reduces uncertainty and increases reliability.

The redundancy provided by multisensory input can facilitate or even enable learning.

Integrating information from multiple sources is generally beneficial: it reduces uncertainty and supports inferences unavailable to any single source.

"The hemianopia that follows unilateral removal of the cortex that mediates visual behavior cannot be explained simply in classical terms of interruption of the visual radiations that serve cortical function. Explanation of the deficit requires a broader point of view, namely, that visual attention and perception are mediated at both forebrain and midbrain levels, which interact in their control of visually guided behavior."

(Sprague, 1966)

Verschure summarizes version VII of his distributed adaptive control model as "a unifying theory" of perception, cognition, and action. He states that it uses a learned world model in its contextual layer which biases perceptual processing (top-down) on the one hand and saliency (bottom-up) on the other. Between these two sits what he calls the validation gate, which defines match and mismatch between world model and percepts.

Sub-threshold multisensory neurons respond directly only to one modality; however, the strength of that response is strongly influenced by input from another modality.

My theory on sub-threshold multisensory neurons: they receive only inhibitory input from the modality to which they do not directly respond when that input falls outside their receptive field, and they receive no excitatory input from that modality when the stimulus falls inside their RF.
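The hypothesis above can be written down as a toy response function. This is a sketch of the note's own hypothesis, not an established model; the function name, parameters, and linear inhibition are illustrative assumptions.

```python
def response(visual_drive: float, auditory_in_rf: bool, auditory_strength: float) -> float:
    """Toy sub-threshold multisensory neuron (sketch of the hypothesis above).

    Only the visual modality drives the neuron directly. Per the hypothesis,
    auditory input outside the neuron's receptive field contributes inhibition,
    while auditory input inside the RF adds no excitatory drive (the visual
    drive passes through unchanged).
    """
    drive = visual_drive
    if not auditory_in_rf:
        drive -= auditory_strength  # cross-modal inhibition from outside the RF
    return max(0.0, drive)          # firing rates cannot be negative
```

So an out-of-field auditory stimulus suppresses the visually driven response, and a within-field one leaves it unchanged, matching the two cases of the hypothesis.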

Patrick Winston states that predictive simulation is enabled by considerable reuse of perceptual and motor apparatus.

Patrick Winston calls perception "guided hallucination".

Deco and Rolls introduce a system that uses a trace learning rule to learn recognition of more and more complex visual features in successive layers of a neural architecture. In each layer, the specificity of the features increases together with the receptive fields of the neurons, until the receptive fields span most of the visual field and the features effectively code for objects. The model is thus a model of the development of object-based attention.
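The core of a trace learning rule is a Hebbian update gated by a temporally smoothed ("trace") version of the postsynaptic activity, so that inputs arriving close together in time (e.g. successive views of the same object) become associated. The sketch below shows a generic trace rule; the parameter names and values are illustrative and not Deco and Rolls' exact formulation.

```python
import numpy as np

def trace_update(w, x, y_trace_prev, y, eta=0.5, lr=0.1):
    """One step of a generic trace learning rule (illustrative sketch).

    w            -- weight vector of the postsynaptic neuron
    x            -- presynaptic input vector
    y            -- current postsynaptic activity
    y_trace_prev -- trace (temporal average) of past postsynaptic activity
    eta          -- how much weight the past trace keeps
    lr           -- learning rate
    """
    y_trace = (1 - eta) * y + eta * y_trace_prev  # smear activity over time
    w = w + lr * y_trace * x                      # Hebbian update gated by the trace
    return w, y_trace
```

Because the trace persists across stimulus presentations, a weight can grow for an input that arrives while the trace from a *previous* input is still active, which is what ties temporally adjacent views together.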

There is the view that perception is an active process and cannot be understood without an active component.

The terms 'active vision', 'active perception', 'smart sensing', and 'animate vision' are sometimes used synonymously.

Active perception and its synonyms usually refer to a sensor which can be moved to change the way it perceives the world.

The way in which the perception of the world changes when the sensor is moved physically is a source of information in addition to static perception of the world.

Kleesiek et al. use a recurrent neural network with parametric bias (RNNPB) to classify objects from the multisensory percepts induced by interacting with them.

Attention affects both early and late perceptual processing.

Divisive normalization models have explained how attention can facilitate or suppress some neurons' responses.
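A minimal sketch of how divisive normalization yields both effects: each neuron's stimulus drive is multiplied by an attentional gain and then divided by the pooled activity of the population. The function, gains, and constants below are illustrative assumptions in the spirit of normalization models of attention, not any specific published parameterization.

```python
import numpy as np

def normalized_response(stimulus_drive, attention_gain, sigma=1.0):
    """Divisive normalization with attentional gain (illustrative sketch).

    Attended neurons get a larger gain; every neuron's output is divided
    by the summed (pooled) excitatory drive plus a semi-saturation constant.
    """
    excitatory = attention_gain * stimulus_drive
    return excitatory / (sigma + excitatory.sum())

drive = np.array([10.0, 10.0])                                # two neurons, equal drive
uniform = normalized_response(drive, np.array([1.0, 1.0]))    # no attentional bias
attended = normalized_response(drive, np.array([2.0, 1.0]))   # attend neuron 0
```

Because the attended neuron's boosted drive also enters the shared normalization pool, its own response is facilitated while the unattended neuron's response is suppressed, which is exactly the dual effect the note describes.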

Sensation refers to the change of state of the nervous system induced purely by a stimulus. Perception integrates sensation with experience and training.

According to Friston, percepts are the products of recognizing the causes of sensory input and sensation.

In order to recognize, i.e. to identify, the causes underlying a sensation (according to Friston), one has to mentally undo the transformation from causes to sensations.

These transformations may not be invertible—for example if different causes interact in non-linear ways.

Given a generative model, it can be possible to find the most likely cause (or causes) of a sensation even if the causes interact in complex ways.
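This can be made concrete with a toy generative model: two hidden causes mix non-linearly into one observation, so the forward mapping is not invertible, but a brute-force search over candidate causes still finds a maximum-likelihood explanation. The mixing function, grid, and noise level are all illustrative assumptions.

```python
import numpy as np

def g(a, b):
    """Assumed forward (generative) model: non-linear, non-invertible mixing
    of two hidden causes into a single sensory value."""
    return a * b + np.sin(a)

true_a, true_b = 1.2, 0.7
observation = g(true_a, true_b) + 0.01   # slightly noisy sensation

# Maximum-likelihood by brute force: score every candidate cause pair under a
# Gaussian likelihood (flat prior) and keep the best-scoring pair.
grid = np.linspace(0, 2, 201)
A, B = np.meshgrid(grid, grid)
log_lik = -((g(A, B) - observation) ** 2)
i, j = np.unravel_index(np.argmax(log_lik), log_lik.shape)
best_a, best_b = A[i, j], B[i, j]
```

Note that many cause pairs may explain the observation almost equally well; that ambiguity is precisely the non-invertibility mentioned above, and the generative model resolves it only up to "most likely", not uniquely.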

The goal of generative models is "to learn representations that are economical to describe but allow the input to be reconstructed accurately".

If the main task of cognition is generating the correct actions, then it is not important in itself to recover a perfect representation of the world from perception.

Jerome Feldman argues that the Neural Binding Problem is really four related problems and not distinguishing between them contributes to the difficulty of understanding them.

Jerome Feldman distinguishes between the following four "technical issues" that together form the binding problem: "General Considerations of Coordination", "The Subjective Unity of Perception", "Visual Feature-Binding", and "Variable Binding".

The general Binding Problem according to Jerome Feldman is really a problem of any distributed information processing system: it is difficult and sometimes impossible or intractable for a system that keeps and processes information in a distributed fashion to combine all the information available and act on it.

Jerome Feldman talks about the sub-problem of "General Considerations of Coordination" of the general Binding Problem as more or less a problem of synchronization, and states that modeling efforts are well underway, taking into account physiological details such as spiking behavior and neuronal oscillations.

The sub-problem of "Subjective Unity of Perception" according to Feldman is the problem of explaining why we experience perception as an "integrated whole" while it is processed by "largely distinct neural circuits".

Feldman relates his "Subjective Unity of Perception" to the stable world illusion.

Feldman gives a functional explanation of the stable world illusion, but he does not seem to explain "Subjective Unity of Perception".

Feldman states that enough is known about what he calls "Visual Feature Binding" that it no longer needs to be considered a problem.

Feldman explains Visual Feature Binding by the fact that all the features detected in the fovea usually belong together (because it is so small), and through attention. He cites Chikkerur et al.'s Bayesian model of the role of spatial and object attention in visual feature binding.

Feldman states that "Neural realization of variable binding is completely unsolved".

Already von Helmholtz formulated the idea that prior knowledge, or expectations, are fused with sensory information into perception.

This idea is at the core of Bayesian theory.
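In the simplest Bayesian reading of Helmholtz's idea, a Gaussian prior (the expectation) is fused with a Gaussian likelihood (the sensory evidence) into a posterior whose mean sits between the two, weighted by their precisions. The scenario and numbers below are illustrative assumptions.

```python
# Assumed example: where is an object, in degrees from straight ahead?
prior_mean, prior_sigma = 0.0, 2.0   # expectation: probably straight ahead
obs_mean, obs_sigma = 4.0, 1.0       # sensory measurement: 4 degrees right

# For Gaussians, the posterior is a precision-weighted combination.
w_prior = 1 / prior_sigma**2
w_obs = 1 / obs_sigma**2
post_mean = (w_prior * prior_mean + w_obs * obs_mean) / (w_prior + w_obs)
post_sigma = (w_prior + w_obs) ** -0.5
```

The percept (posterior mean) is pulled from the raw measurement toward the expectation, and is sharper than either the prior or the evidence alone: sensory information and prior knowledge are literally fused.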