@article{wermter-et-al-2004,
    abstract = {Learning by multimodal observation of vision and language offers a potentially powerful paradigm for robot learning. Recent experiments have shown that 'mirror' neurons are activated when an action is being performed, perceived, or verbally referred to. Different input modalities are processed by distributed cortical neuron ensembles for leg, arm and head actions. In this overview paper we consider this evidence from mirror neurons by integrating motor, vision and language representations in a learning robot.},
    author = {Wermter, S. and Weber, C. and Elshaw, M. and Panchev, C. and Erwin, H. and Pulverm\"{u}ller, F.},
    doi = {10.1016/j.robot.2004.03.011},
    issn = {0921-8890},
    journal = {Robotics and Autonomous Systems},
    keywords = {learning, mirror-neurons, multi-modality},
    month = jun,
    number = {2-3},
    pages = {171--175},
    title = {Towards multimodal neural robot learning},
    url = {http://www.informatik.uni-hamburg.de/WTM/ps/randa04.pdf},
    volume = {47},
    year = {2004}
}
