
Alexandre Pitti, Yasuo Kuniyoshi, Mathias Quoy, Philippe Gaussier. Modeling the Minimal Newborn's Intersubjective Mind: The Visuotopic-Somatotopic Alignment Hypothesis in the Superior Colliculus. PLoS ONE 8(7), e69474 (26 July 2013). doi:10.1371/journal.pone.0069474
@article{pitti-et-al-2013,
    abstract = {Whether newborns possess inborn social skills is a long-standing debate in developmental psychology. Fetal behavioral and anatomical observations show evidence for the control of eye movements and facial behaviors during the third trimester of pregnancy, and specific sub-cortical areas, like the superior colliculus ({SC}) and the striatum, appear to be functionally mature enough to support these behaviors. These observations suggest that the newborn is potentially mature enough to develop minimal social skills. In this manuscript, we propose that the mechanism of sensory alignment observed in {SC} is particularly important for enabling the social skills observed at birth, such as facial preference and facial mimicry. In a computational simulation of the maturing superior colliculus connected to the simulated facial tissue of a fetus, we model how incoming tactile information is used to direct visual attention toward faces. We suggest that the unisensory superficial visual layer (eye-centered) and the deep somatotopic layer (face-centered) in {SC} are combined into an intermediate layer for visuo-tactile integration, and that multimodal alignment in this third layer allows newborns to be sensitive to the configuration of eyes and mouth. We show that the visual and tactile maps align through a Hebbian learning stage and strengthen their synaptic links to each other in the intermediate layer. As a result, the global network exhibits emergent properties such as sensitivity to the spatial configuration of face-like patterns and the detection of eye and mouth movements.},
    author = {Pitti, Alexandre and Kuniyoshi, Yasuo and Quoy, Mathias and Gaussier, Philippe},
    day = {26},
    doi = {10.1371/journal.pone.0069474},
    journal = {PLoS ONE},
    keywords = {development, face-detection, learning, model, sc, visual-processing},
    month = jul,
    number = {7},
    pages = {e69474},
    publisher = {Public Library of Science},
    title = {Modeling the Minimal Newborn's Intersubjective Mind: The {Visuotopic-Somatotopic} Alignment Hypothesis in the Superior Colliculus},
    url = {http://dx.doi.org/10.1371/journal.pone.0069474},
    volume = {8},
    year = {2013}
}


Pitti et al. use a Hebbian learning algorithm to learn the somato-visual registration, i.e., to bring the tactile (face-centered) and visual (eye-centered) maps into register in an intermediate layer of the SC.
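
As a concrete illustration of that mechanism (a minimal sketch, not the authors' code), the Python/NumPy snippet below aligns a simulated 1-D tactile map and a 1-D visual map onto a shared intermediate layer with a plain Hebbian rule. The map sizes, Gaussian activity bumps, learning rate, and the normalization step are all illustrative assumptions; the only point is that co-occurring visuo-tactile stimulation suffices to bring the two maps into register.

import numpy as np

rng = np.random.default_rng(0)

N = 20        # units per 1-D unisensory map (assumption)
M = 20        # units in the intermediate multimodal layer (assumption)
eta = 0.05    # Hebbian learning rate (assumption)

W_vis = rng.uniform(0.0, 0.1, size=(M, N))   # visual (eye-centered) -> intermediate
W_tac = rng.uniform(0.0, 0.1, size=(M, N))   # tactile (face-centered) -> intermediate

def bump(center, n, sigma=1.5):
    # Gaussian population activity centered on the stimulated location.
    x = np.arange(n)
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

for _ in range(5000):
    # One event (e.g., the fetus's hand brushing its face) drives
    # corresponding locations of the visual and tactile maps together.
    loc = rng.integers(N)
    v = bump(loc, N)
    t = bump(loc, N)

    # The intermediate layer sums its two unisensory inputs.
    z = W_vis @ v + W_tac @ t

    # Hebbian update: links between co-active units are strengthened.
    W_vis += eta * np.outer(z, v)
    W_tac += eta * np.outer(z, t)

    # Keep the weights bounded (an Oja-style normalization, an assumption here).
    W_vis /= np.linalg.norm(W_vis, axis=1, keepdims=True)
    W_tac /= np.linalg.norm(W_tac, axis=1, keepdims=True)

# After learning, each intermediate unit's visual and tactile receptive
# fields peak at (roughly) the same map location: the maps are in register.
misalignment = np.abs(W_vis.argmax(axis=1) - W_tac.argmax(axis=1))
print("mean visuo-tactile misalignment (map units):", misalignment.mean())

Because both unisensory inputs are driven by the same event, each intermediate unit ends up with visual and tactile receptive fields that peak at the same location; this registration is what the paper's face-configuration sensitivity builds on.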

Pitti et al. claim that their model explains the newborn's preference for face-like visual stimuli and can also help explain imitation in newborns. In their account, the SC develops face detection through somato-visual integration.