Tag: biorobotic

The model of natural multisensory integration and localization is based on the leaky integrate-and-fire neuron model.
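For reference, a minimal sketch of a leaky integrate-and-fire neuron in Python (forward-Euler integration; all parameter values are illustrative and not taken from the model referred to above):

    import numpy as np

    # Minimal leaky integrate-and-fire neuron, forward-Euler integration.
    # Parameter values are illustrative placeholders.
    tau_m    = 20e-3   # membrane time constant (s)
    v_rest   = -70e-3  # resting potential (V)
    v_thresh = -54e-3  # spike threshold (V)
    v_reset  = -70e-3  # reset potential after a spike (V)
    r_m      = 1e7     # membrane resistance (ohm)
    dt       = 1e-4    # integration time step (s)

    def simulate_lif(i_input, v0=v_rest):
        """Integrate dv/dt = (-(v - v_rest) + r_m * i) / tau_m; return spike times."""
        v, spikes = v0, []
        for step, i in enumerate(i_input):
            v += dt * (-(v - v_rest) + r_m * i) / tau_m
            if v >= v_thresh:           # threshold crossing -> spike
                spikes.append(step * dt)
                v = v_reset             # hard reset after the spike
        return spikes

    # Constant 2 nA input for 200 ms produces a regular spike train.
    print(simulate_lif(np.full(2000, 2e-9)))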

Rucci et al. explain audio-visual map registration and the learning of orienting responses to audio-visual stimuli by what they call value-dependent learning: after each motor response, a modulatory system evaluates whether that response was good (it brought the target into the center of the system's visual field) or bad. The learning rule strengthens connections between neurons of the network's different subpopulations if their activity is highly correlated while the modulatory response is strong, and weakens them otherwise.
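In effect this is a reward-modulated Hebbian rule. A minimal sketch under that reading (variable names and the exact update form are illustrative assumptions, not Rucci et al.'s equations):

    import numpy as np

    def value_dependent_update(w, pre, post, value, lr=0.01, decay=0.005):
        """Reward-modulated Hebbian update, a sketch of value-dependent learning.

        w     : weight matrix, shape (n_post, n_pre)
        pre   : presynaptic activity vector, shape (n_pre,)
        post  : postsynaptic activity vector, shape (n_post,)
        value : scalar modulatory signal; large after a good motor response
                (target centered in the visual field), small or zero otherwise.
        """
        hebb = np.outer(post, pre)   # correlation of pre-/postsynaptic activity
        # Correlated activity is strengthened in proportion to the value
        # signal; a small decay term weakens connections otherwise.
        return w + lr * value * hebb - decay * w

    # Example: a strong value signal potentiates the weights of co-active pairs.
    w = np.zeros((3, 4))
    w = value_dependent_update(w, pre=np.array([1., 0., 1., 0.]),
                               post=np.array([0., 1., 1.]), value=1.0)
    print(w)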

Casey et al. use their ANN in a robotic system for audio-visual localization.

Casey et al. focus on making their system work in real time and with complex stimuli, compromising on biological realism.

In Casey et al.'s system, the interaural level difference (ILD) alone is used as the cue for sound-source localization (SSL).

In Casey et al.'s experiments, the two microphones are one meter apart and the stimulus is one meter away from the midpoint between the two microphones. There is no damping body between the microphones, but at that interaural distance and distance to the stimulus, the two source-to-microphone distances differ enough that ILD should still be a good localization cue.
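Since no head shadow is involved, the available ILD comes from distance attenuation alone: free-field sound amplitude falls off as 1/r, so the level difference follows directly from the two source-to-microphone distances. A minimal sketch for the stated geometry (the 1/r model and the example angles are illustrative assumptions):

    import numpy as np

    # ILD from free-field 1/r attenuation alone (no head shadow), for the
    # geometry described above: microphones 1 m apart, source 1 m from
    # their midpoint.
    MIC_L, MIC_R = np.array([-0.5, 0.0]), np.array([0.5, 0.0])

    def ild_db(azimuth_deg, source_dist=1.0):
        """ILD (dB) for a point source at the given azimuth from the midpoint."""
        a = np.radians(azimuth_deg)
        src = source_dist * np.array([np.sin(a), np.cos(a)])
        d_l, d_r = np.linalg.norm(src - MIC_L), np.linalg.norm(src - MIC_R)
        # Amplitude falls off as 1/r, so the level difference in dB is
        # 20*log10(d_left/d_right): positive when the source is to the right.
        return 20.0 * np.log10(d_l / d_r)

    for az in (0, 15, 45, 90):
        print(f"azimuth {az:3d} deg -> ILD {ild_db(az):5.2f} dB")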

Ijspeert et al. show that the same spinal central pattern generators (CPGs) can produce both swimming and walking behavior in a physical robotic model of a salamander.

Ijspeert et al. use their robotic salamander model to test hypotheses about the neural networks that produce swimming and walking behaviors in salamanders.
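A minimal sketch of this kind of CPG: a chain of coupled phase oscillators whose coupling imposes a fixed phase lag between neighboring segments, producing a traveling body wave (the equations and parameters below are simplified illustrations, not Ijspeert et al.'s actual model, which additionally controls oscillation amplitudes and includes limb oscillators):

    import numpy as np

    # Chain of coupled phase oscillators, a simplified sketch of a spinal
    # CPG model. Parameters are illustrative.
    N, DT = 10, 1e-3           # number of body segments, time step (s)
    W, PHI = 10.0, 2*np.pi/N   # coupling weight, phase lag per segment

    def simulate_cpg(drive, steps=3000):
        """Integrate the chain; here the drive simply sets the wave frequency
        (in Ijspeert et al.'s model the drive level additionally switches
        between walking and swimming gaits)."""
        theta = np.random.uniform(0, 2*np.pi, N)  # random initial phases
        freq = drive * np.ones(N)                 # intrinsic frequencies (Hz)
        out = np.empty((steps, N))
        for t in range(steps):
            dtheta = 2*np.pi*freq
            # Nearest-neighbour coupling pulls each oscillator toward a fixed
            # phase lag relative to its neighbours -> travelling wave.
            dtheta[:-1] += W*np.sin(theta[1:] - theta[:-1] + PHI)
            dtheta[1:]  += W*np.sin(theta[:-1] - theta[1:] - PHI)
            theta += DT*dtheta
            out[t] = np.cos(theta)                # motor output per segment
        return out

    wave = simulate_cpg(drive=1.0)   # 1 Hz travelling wave along the body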