Show Tag: sensorimotor


It seems that the representations of words can be more or less modal, i.e., words may be more or less abstract and thus more or less grounded in sensory, motor, or emotional areas.

Sometimes, the best (fastest, least suboptimal, most effortless, etc.) response to a stimulus can be generated relatively directly from the way the world interacts with the body, with little or no neural processing in between.

Electrostimulation of putamen neurons can evoke body movement consistent with the map of somatosensory space in that brain region.

In the Simon task, subjects are required to respond to a stimulus with a response that is spatially congruent or incongruent to that stimulus: they have, for example, to press a button with the left hand in response to a stimulus which is presented either on the left or on the right. A congruent response (stimulus on the left, respond by pressing a button with the left hand) is usually faster than an incongruent response.
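
A minimal sketch (not tied to any particular study) of how trials in such a paradigm are labeled; the trial structure and field names are illustrative assumptions:

```python
# Label Simon-task trials as congruent or incongruent: a trial is congruent
# when the stimulus appears on the same side as the required response hand.

def is_congruent(stimulus_side: str, response_side: str) -> bool:
    return stimulus_side == response_side

trials = [
    {"stimulus_side": "left",  "response_side": "left"},   # congruent
    {"stimulus_side": "right", "response_side": "left"},   # incongruent
]

for t in trials:
    label = "congruent" if is_congruent(**t) else "incongruent"
    print(t, "->", label)
```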

Yan et al. present a system which uses auditory and visual information to learn an audio-motor map (in a functional sense) and orient a robot towards a speaker. Learning is online.

Yan et al. do not evaluate the accuracy of audio-visual localization.

Yan et al. report an accuracy of auditory localization of $3.4^\circ$ for online learning and $0.9^\circ$ for offline calibration.

Yan et al. perform sound source localization using both ITD and ILD. Some of their auditory processing is bio-inspired.
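
The following is a generic sketch of ITD-based azimuth estimation, not Yan et al.'s bio-inspired pipeline (ILD processing is omitted); the microphone spacing and sampling rate are assumptions chosen for illustration:

```python
# Estimate the azimuth of a sound source from the interaural time difference
# (ITD) obtained by cross-correlating the two microphone signals.

import numpy as np

FS = 16_000          # sampling rate in Hz (assumed)
MIC_DISTANCE = 0.2   # microphone spacing in metres (assumed)
SPEED_OF_SOUND = 343.0

def estimate_itd(left: np.ndarray, right: np.ndarray) -> float:
    """ITD in seconds; positive means the right channel leads (source on the right)."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # lag in samples
    return lag / FS

def itd_to_azimuth(itd: float) -> float:
    """Azimuth in degrees from the free-field model itd = d * sin(theta) / c."""
    s = np.clip(itd * SPEED_OF_SOUND / MIC_DISTANCE, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

# Illustrative use: a tone from 20 degrees to the right reaches the right mic first.
t = np.arange(0, 0.05, 1 / FS)
delay = MIC_DISTANCE * np.sin(np.radians(20)) / SPEED_OF_SOUND
right_sig = np.sin(2 * np.pi * 440 * t)
left_sig = np.sin(2 * np.pi * 440 * (t - delay))   # left channel lags
print(itd_to_azimuth(estimate_itd(left_sig, right_sig)))   # roughly 19 degrees
```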

Eliasmith et al. model sensory-motor processing as task-dependent compression of sensory data and decompression of motor programs.
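
A schematic sketch of that compress/decompress idea, not the actual NEF/Spaun implementation; all dimensions and the random matrices are placeholders:

```python
# A high-dimensional sensory vector is compressed into a small task-relevant
# state, and a compact motor program is decompressed into a full trajectory.

import numpy as np

rng = np.random.default_rng(0)

SENSORY_DIM, STATE_DIM, MOTOR_PROGRAM_DIM, TRAJECTORY_DIM = 784, 16, 4, 100

# Task-dependent compression: project the sensory input onto a small basis.
compress = rng.normal(size=(STATE_DIM, SENSORY_DIM)) / np.sqrt(SENSORY_DIM)

# Decompression: expand a compact motor program into a full movement trajectory.
decompress = rng.normal(size=(TRAJECTORY_DIM, MOTOR_PROGRAM_DIM))

def act(sensory_input: np.ndarray, policy: np.ndarray) -> np.ndarray:
    """Compress the input, select a motor program, decompress it into a trajectory."""
    state = compress @ sensory_input
    motor_program = policy @ state
    return decompress @ motor_program

policy = rng.normal(size=(MOTOR_PROGRAM_DIM, STATE_DIM))
trajectory = act(rng.normal(size=SENSORY_DIM), policy)
print(trajectory.shape)   # (100,)
```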

Körding and Wolpert let subjects reach for some target without seeing their own hand.

In some of the trials, subjects were briefly given visual feedback of their hand position halfway through the trial, with varying reliability. In those trials where the visual feedback was clear, subjects were also given clear feedback of their hand position at the end of the trial.

The visual feedback in the middle of the trial was displaced by an amount which was distributed according to a Gaussian distribution with a mean of 1 cm or, in a second experiment, according to a bi-modal distribution.

Körding and Wolpert showed that their subjects correctly learned the distribution of the displacement of the visual feedback with respect to the actual position of their hand and used it in the task in a manner consistent with a Bayesian cue-integration model.
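
A toy version of that Gaussian cue-integration account (illustrative numbers, not the parameters fitted in the study): the prior over the lateral shift has a 1 cm mean, and the observed feedback is weighted by its reliability.

```python
# Posterior mean of the feedback shift for a Gaussian prior and Gaussian likelihood.

def posterior_shift(observed_shift_cm: float,
                    sigma_visual_cm: float,
                    prior_mean_cm: float = 1.0,
                    sigma_prior_cm: float = 0.5) -> float:
    w_visual = sigma_prior_cm**2 / (sigma_prior_cm**2 + sigma_visual_cm**2)
    return w_visual * observed_shift_cm + (1 - w_visual) * prior_mean_cm

# Clear feedback (small sigma) -> estimate follows the observation;
# blurred feedback (large sigma) -> estimate is pulled towards the 1 cm prior.
print(posterior_shift(2.0, sigma_visual_cm=0.1))   # close to 2.0
print(posterior_shift(2.0, sigma_visual_cm=2.0))   # close to 1.0
```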

Zhao et al. propose a model which develops perception and behavior in parallel.

Their motivation is the embodiment idea, which states that perception and behavior develop in behaving animals.

Disparity-selective neurons in visual cortex have preferred disparities of only a few degrees, whereas disparity in natural environments ranges over tens of degrees.

The explanation offered by Zhao et al. assumes that animals actively keep disparity within a small range during development, and that therefore only selectivity for small disparities develops.

Zhao et al. present a model of joint development of disparity selectivity and vergence control.

Zhao et al.'s model develops both disparity selection and vergence control in an effort to minimize reconstruction error.

It uses a form of sparse coding to learn to approximate its input, together with a variant of the actor-critic learning algorithm called the natural actor-critic reinforcement learning algorithm (NACREL).

The teaching signal to the NACREL algorithm is the reconstruction error of the model after the action it produces.
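
A much-simplified sketch of this coupling, not the NACREL implementation: a fixed random dictionary with a hard sparsity constraint stands in for the learned sparse-coding stage, a plain softmax actor-critic replaces the natural actor-critic, and the binocular input, action set, sizes, and learning rates are all placeholder assumptions. The reward is the negative reconstruction error, as described above.

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUT, N_BASIS, K_ACTIVE = 64, 32, 4       # patch size, dictionary size, sparsity
ACTIONS = np.array([-1.0, 0.0, 1.0])         # converge, hold, diverge (arbitrary units)

dictionary = rng.normal(size=(N_INPUT, N_BASIS))
dictionary /= np.linalg.norm(dictionary, axis=0)

def sparse_code(x: np.ndarray) -> np.ndarray:
    """Keep only the K most correlated basis functions (crude sparse coding)."""
    a = dictionary.T @ x
    mask = np.zeros_like(a)
    mask[np.argsort(np.abs(a))[-K_ACTIVE:]] = 1.0
    return a * mask

def reconstruction_error(x: np.ndarray) -> float:
    return float(np.sum((x - dictionary @ sparse_code(x)) ** 2))

def binocular_patch(disparity: float) -> np.ndarray:
    """Stand-in for a binocular input whose structure degrades with disparity."""
    base = rng.normal(size=N_INPUT)
    return base + disparity * rng.normal(size=N_INPUT)

# Softmax actor over vergence actions, scalar critic used as a baseline.
theta = np.zeros(len(ACTIONS))
v = 0.0
alpha_actor, alpha_critic = 0.05, 0.1

disparity = 3.0
for step in range(200):
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()
    a_idx = rng.choice(len(ACTIONS), p=probs)

    # The vergence action changes the disparity; reward is negative reconstruction error.
    disparity = max(0.0, disparity + ACTIONS[a_idx])
    reward = -reconstruction_error(binocular_patch(disparity))

    td_error = reward - v                       # one-step, undiscounted critic
    v += alpha_critic * td_error
    grad_log_pi = -probs
    grad_log_pi[a_idx] += 1.0
    theta += alpha_actor * td_error * grad_log_pi
```

In this toy, larger disparity inflates the input energy and hence the reconstruction error, so the actor drifts towards convergence; in the actual model, both the dictionary and the policy are learned jointly.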

In cats, overt visual function emerges only 2-3 weeks postnatally.

Explicit or implicit representations of world dynamics are necessary for optimal controllers since they have to anticipate state changes before the arrival of the necessary sensor data.
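
A minimal, generic illustration of that point (not taken from a specific controller): a forward model lets the controller anticipate the next state before the corresponding sensor reading arrives; the constant-velocity dynamics and the numbers are assumptions.

```python
import numpy as np

DT = 0.01                               # control interval in seconds (assumed)
A = np.array([[1.0, DT],                # state transition for position and velocity
              [0.0, 1.0]])
B = np.array([[0.0],                    # control input acts on velocity
              [DT]])

def predict(state: np.ndarray, control: float) -> np.ndarray:
    """Anticipate the next state from the internal world model, before sensing it."""
    return A @ state + (B * control).ravel()

state = np.array([0.0, 1.0])            # at 0 m, moving at 1 m/s
predicted = predict(state, control=2.0) # used to choose the next action
print(predicted)                        # sensor data confirming it arrives only later
```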