Show Reference: "Robust localization of auditory and visual targets in a robotic barn owl"

Michele Rucci, Jonathan Wray, and Gerald M. Edelman: "Robust localization of auditory and visual targets in a robotic barn owl". Robotics and Autonomous Systems, Vol. 30, No. 1–2 (31 January 2000), pp. 181–193. doi:10.1016/S0921-8890(99)00071-8
@article{rucci-et-al-2000,
    abstract = {In the last two decades, the barn owl, a nocturnal predator with accurate visual and auditory capabilities, has become a common experimental system for neuroscientists investigating the biological substrate of spatial localization and orienting behavior. As a result, much data are now available regarding the anatomy and physiology of many neural structures involved in such processes. On the basis of this growing body of knowledge, we have recently built a computer model that incorporates detailed replicas of several important neural structures participating in the production of orienting behavior. In order to expose this model to sensorimotor and environmental conditions similar to those experienced by a barn owl, the computer simulations of the neural structures were coupled to a robot emulating the head of a barn owl, which was presented with auditory and visual stimulation. By using this system we have performed a number of studies on the mechanisms underlying the barn owl's calibration of orienting behavior and accurate localization of auditory targets in noisy environments. In this paper we review the main results that have emerged from this line of research. This work provides a concrete example of how, by coupling computer simulations of brain structures with robotic systems, it is possible to gain a better understanding of the basic principles of biological systems while producing robust and flexible control of robots operating in the real world.},
    author = {Rucci, Michele and Wray, Jonathan and Edelman, Gerald M.},
    day = {31},
    doi = {10.1016/s0921-8890(99)00071-8},
    issn = {0921-8890},
    journal = {Robotics and Autonomous Systems},
    keywords = {auditory, icx, learning, localization, model, sc, sc-input, visual},
    month = jan,
    number = {1-2},
    pages = {181--193},
    title = {Robust localization of auditory and visual targets in a robotic barn owl},
    url = {http://dx.doi.org/10.1016/s0921-8890(99)00071-8},
    volume = {30},
    year = {2000}
}


Rucci et al. present a robotic system based on their neural model of audiovisual localization.

Rucci et al. present an algorithm that performs auditory localization and combines auditory and visual localization in a common superior colliculus (SC) map. The mapping between the two representations is learned through value-dependent learning.

Rucci et al.'s neural network learns to align the ICx (external nucleus of the inferior colliculus) and SC (optic tectum, OT) maps by means of value-dependent learning: the value signal depends on whether the target lies in the fovea after a saccade.
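The value-dependent learning scheme can be sketched as follows. This is a minimal 1-D toy, not a replica of Rucci et al.'s model: the map sizes, learning rate, epsilon-greedy exploration, normalization, and foveation criterion are all illustrative assumptions. The core idea it demonstrates is that Hebbian coincidences between auditory (ICx) input and motor (SC) output are strengthened only when a binary value signal reports that the saccade foveated the target.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 40                 # units per map (assumed size, for illustration)
eta = 0.5              # learning rate (assumed)
fovea_tol = 1          # saccade counts as foveating if within 1 unit (assumed)
epsilon = 0.3          # exploration rate (assumed; not part of the original model)

# Random initial mapping from auditory ICx units to SC motor units
W = rng.random((N, N)) * 0.1

def icx_activity(target, sigma=2.0):
    """Gaussian population response of ICx units to a target direction."""
    units = np.arange(N)
    return np.exp(-0.5 * ((units - target) / sigma) ** 2)

for trial in range(20000):
    target = int(rng.integers(0, N))
    a = icx_activity(target)              # auditory input pattern
    sc = W.T @ a                          # SC activation
    if rng.random() < epsilon:            # occasional random saccade (exploration)
        saccade = int(rng.integers(0, N))
    else:
        saccade = int(np.argmax(sc))      # orient toward the SC peak

    # Value signal: 1 if the target is in the fovea after the saccade, else 0
    value = 1.0 if abs(saccade - target) <= fovea_tol else 0.0

    # Value-dependent Hebbian update: only rewarded pre/post coincidences
    # strengthen the ICx -> SC mapping
    post = np.zeros(N)
    post[saccade] = 1.0
    W += eta * value * np.outer(a, post)
    W /= W.sum(axis=1, keepdims=True) + 1e-9   # normalization keeps weights bounded

# After learning, greedy saccades should land near the target across the map
errors = [abs(int(np.argmax(W.T @ icx_activity(t))) - t) for t in range(N)]
print(sum(errors) / len(errors))
```

Because the update is gated by the value signal rather than by an explicit error vector, the network never needs to be told *where* the target was relative to the saccade endpoint, only *whether* the saccade succeeded; alignment emerges from rewarded coincidences alone.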

Rucci et al.'s model of learning to combine the ICx and SC maps does not account for the point-to-point projections from the SC back to the ICx reported later by Knudsen et al.

Rucci et al.'s plots of ICc activation look very similar to Jorge's IPD matrices.