Show Tag: development

The model of the SC due to Cuppini et al. reproduces the development of the following (the measures in items 2, 4, and 5 are defined in the sketch after this list):

  1. multi-sensory neurons
  2. multi-sensory enhancement
  3. intra-modality depression
  4. super-additivity
  5. inverse effectiveness
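
For reference, items 2, 4, and 5 are usually quantified at the single-neuron level as follows; this is the conventional formulation due to Stein and colleagues, not necessarily the exact one used by Cuppini et al.:

\[
\text{enhancement} = \frac{CM - \max(A, V)}{\max(A, V)} \times 100\,\%,
\]

where \(CM\) is the response to the cross-modal stimulus and \(A\), \(V\) are the responses to its unisensory components. Super-additivity is the case \(CM > A + V\); inverse effectiveness is the observation that enhancement tends to grow as the unisensory responses become weaker.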

Optimal multi-sensory integration is learned (for many tasks).
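
As background for what `optimal' means here: in the standard maximum-likelihood account of cue combination (as in Ernst and Banks), independent Gaussian unisensory estimates \(\hat{s}_1, \hat{s}_2\) with variances \(\sigma_1^2, \sigma_2^2\) are combined by weighting each cue with its relative reliability,

\[
\hat{s} = \frac{1/\sigma_1^2}{1/\sigma_1^2 + 1/\sigma_2^2}\,\hat{s}_1 + \frac{1/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2}\,\hat{s}_2,
\qquad
\sigma_{12}^2 = \frac{\sigma_1^2\,\sigma_2^2}{\sigma_1^2 + \sigma_2^2},
\]

so the combined variance is never larger than either unisensory variance. This is only a background sketch of the general framework, not a description of any particular model in these notes.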

Congenital blindness leads to tactile and auditory stimuli activating early dorsal cortical visual areas.

Newborns track schematic, face-like visual stimuli in the periphery up to about one month of age. They start tracking such stimuli in central vision after about two months and stop after about five months.

According to Johnson and Morton, there are two visual pathways for face detection: the primary cortical pathway and one through SC and pulvinar.

The cortical pathway is called CONLEARN and is theorized to be plastic, whereas the sub-cortical pathway is called CONSPEC and is thought to be fixed and genetically predisposed to detect conspecific faces.

Miikkulainen et al. use a hierarchical version of their SOM-based algorithm to model the natural development of visual capabilities.
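
As a minimal illustration of what `SOM-based' self-organization means, here is a basic Kohonen-style map update in Python. Miikkulainen et al.'s models (the LISSOM family) are considerably more elaborate, using Hebbian learning with recurrent lateral connections, so this is only a generic sketch, not their algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
grid, dim = 16, 9                          # 16x16 map of units; toy input dimensionality
W = rng.random((grid, grid, dim))          # afferent weights of each map unit
ys, xs = np.mgrid[0:grid, 0:grid]          # map coordinates of the units

def som_step(x, W, lr=0.1, sigma=2.0):
    """One self-organizing update for a single input vector x."""
    dists = np.linalg.norm(W - x, axis=-1)                    # each unit's distance to the input
    wy, wx = np.unravel_index(np.argmin(dists), dists.shape)  # winning unit
    nb = np.exp(-((ys - wy) ** 2 + (xs - wx) ** 2) / (2 * sigma ** 2))  # neighbourhood kernel
    W += lr * nb[..., None] * (x - W)                         # pull the neighbourhood towards x
    return W

for _ in range(10_000):                    # train on random toy inputs
    W = som_step(rng.random(dim), W)
```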

Retinal waves of spontaneous activity occur before photoreceptors develop.

They are thought to be involved in setting up the spatial organization of the visual pathway.

The distinct layers for each eye in the LGN and in layer 4 of V1 only arise after the initial projections from the retina are made, but, in higher mammals, before birth.

The distribution of monocular dominance in visual cortex neurons is drastically affected by monocular stimulus deprivation during early development.

Competition appears to be a major factor in organizing the visual system.

Law and Constantine-Paton transplanted eye primordia between tadpoles to create three-eyed frogs.

The additional eyes connected to the frogs' contralateral tecta and created competition between inputs, which is not usually present in frogs (whose optic chiasm decussates completely, so each tectum normally receives input from only one eye).

The result was tecta in which alternating stripes are responsive to input from different eyes.

Similar results are obtained if one of the tecta is removed and both natural retinae project to the remaining tectum.

The theoretical accounts of multi-sensory integration due to Beck et al. and Ma et al. do not learn and leave little room for learning.

Thus, they fail to explain an important aspect of multi-sensory integration in humans.

Cats, if raised in an environment in which the spatio-temporal relationship of audio-visual stimuli is artificially different from natural conditions, develop spatio-temporal integration of audio-visual stimuli accordingly. Their SC neurons develop preference to audio-visual stimuli with the kind of spatio-temporal relationship encountered in the environment in which they were raised.

Response properties in mouse superficial SC neurons are not strongly influenced by experience.

How strongly SC neurons' development depends on experience (and how well developed they are at birth) differs from species to species; just because the superficial mouse SC is developed at birth does not mean it is in other species (and I believe responsiveness in cats develops with experience).

Response properties of superficial SC neurons are different from those found in mouse V1 neurons.

Rucci et al.'s neural network learns how to align ICx and SC (OT) maps by means of value-dependent learning: The value signal depends on whether the target was in the fovea after a saccade.
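
A minimal sketch of the learning principle described above: value-gated Hebbian learning in which foveation success after a saccade acts as the value signal. This is a generic illustration, not Rucci et al.'s actual network, and all sizes and rates are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20                                    # discretized azimuth positions
W = np.full((n, n), 1.0 / n)              # weights: auditory channel (row) -> motor location (column)
offset = 3                                # toy misalignment of the auditory representation

for _ in range(20_000):
    target = rng.integers(n)              # true target azimuth
    aud = (target + offset) % n           # auditory channel activated by the target
    saccade = rng.choice(n, p=W[aud] / W[aud].sum())  # saccade drawn from the motor activation
    value = 1.0 if saccade == target else 0.0         # value signal: was the target foveated?
    W[aud, saccade] += 0.05 * value       # strengthen the active connection only when rewarded
    W[aud] /= W[aud].sum()                # keep each row normalized

# After learning, each auditory channel drives the saccade that foveates its
# target, i.e. the learned mapping compensates for the misalignment.
print(np.argmax(W[(5 + offset) % n]))     # expected: 5
```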

Rucci et al.'s model of learning to combine ICx and SC maps does not take into account the point-to-point projections from SC to ICx reported later by Knudsen et al.

There are multisensory neurons in the newborn macaque monkey's deep SC.

General sensory maps (and map register) are already present in the newborn macaque monkey's deep SC (though receptive fields are large).

Maturational state of the deep SC is different between species—particularly between altricial and precocial species.

Rearing animals in darkness can result in anomalous auditory maps in their superior colliculi.

Cats, being an altricial species, are born with little to no capability for multi-sensory integration; only after birth do they develop first multi-sensory SC neurons and then neurons exhibiting multi-sensory integration at the neural level.

In the development of SC neurons, receptive fields are initially very large and shrink with experience.

Multisensory experience is necessary to develop normal multisensory integration.

The shift in the auditory map in ICx comes with changed projections from ICc to ICx.

There appears to be plasticity with respect to the auditory space map in the SC.

The redundancy provided by multisensory input can facilitate or even enable learning.

The lamina a retinal ganglion cell projects to in the zebrafish optic tectum does not change in the fish's early development. This is in contrast with other animals.

However, the position within the lamina does change.

It is possible that learning of saccade target selection is influenced by reward.

The question is whether this happens on the saliency side or on the selection side.

Xu et al. stress the point that in their cat rearing experiments, multisensory integration arises although there is no reward and no goal-directed behavior connected with the stimuli.

Xu et al. raised two groups of cats in darkness and presented one with congruent and the other with random visual and auditory stimuli. They showed that SC neurons in cats from the congruent-stimulus group developed multi-sensory characteristics while those from the other group mostly did not.
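
A toy correlational-learning reading of this result, just to make the logic concrete: a model neuron driven by its visual input strengthens whichever auditory inputs happen to be co-active with it. This is an illustration of the interpretation, not Xu et al.'s experiment or any published model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_loc = 20                                # discretized spatial locations

def rear(congruent, steps=5000, lr=0.01):
    """Auditory weights of one model SC neuron after simulated rearing."""
    w_aud = np.full(n_loc, 1.0 / n_loc)   # auditory weights, initially flat
    v_pref = 10                           # the neuron's (fixed) visual preference
    for _ in range(steps):
        loc_a = v_pref if congruent else rng.integers(n_loc)  # auditory stimulus location
        a = np.zeros(n_loc)
        a[loc_a] = 1.0                    # auditory input vector
        post = 1.0                        # the neuron is driven by its visual input on every trial
        w_aud += lr * post * a            # Hebbian increment for co-active auditory inputs
        w_aud /= w_aud.sum()              # normalization keeps the weights bounded
    return w_aud

print("congruent rearing:", rear(True)[10])   # auditory weight concentrates at the visual location
print("random rearing:   ", rear(False)[10])  # auditory weights stay roughly flat
```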

According to Patrick Winston, our mental development suddenly diverged from that of the Neanderthals, which raises two central questions: what makes us different from other primates, and what is similar?

Human children often react to multi-sensory stimuli faster than they do to uni-sensory stimuli. However, the latencies they exhibit up to a certain age do not violate the race model as they do in adult humans.
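
The race model referred to here is usually tested with the race-model inequality (due to Miller): if multisensory reaction times were produced by two independent unisensory processes racing each other, the cumulative reaction-time distributions would have to satisfy

\[
F_{AV}(t) \le F_A(t) + F_V(t) \quad \text{for all } t,
\]

where \(F_{AV}\), \(F_A\), and \(F_V\) are the cumulative distribution functions of reaction times to audio-visual, auditory-only, and visual-only stimuli. Adults' fastest multisensory responses violate this bound; children's, up to a certain age, do not.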

Multisensory integration develops after birth in many ways.

Jasso and Triesch presented a simulated virtual reality environment for training robotic models.

Jasso and Triesch acknowledge that a simulation does not always follow exactly the laws of physics. In fact, their environment does not simulate any physics except those of human motion.

Jasso and Triesch argue that, for the high-level cognition they train, the lack of a physics simulation is not a problem.

Jasso and Triesch argue that their simulated robots are not limited by the capabilities of today's robotic technology.

The SC matures quickly compared to the cortex; this is important to protect the young animal from threats early in life.

Altricial species are born with poorly developed capabilities for sensory processing.

(Some) SC neurons in the newborn cat are sensitive to tactile stimuli at birth, to auditory stimuli a few days postnatally, and to visual stimuli last.

Visual responsiveness develops in the cat first from top to bottom in the superficial layers, then, after a long pause, from top to bottom in the lower layers.

The basic topography of retinotectal projections is set up by chemical markers. This topography is coarse and is refined through activity-dependent development.

We do not know whether sensory maps other than the visual map in the SC are initially set up through chemical markers, but it is likely.

If deep SC neurons are sensitive to tactile stimuli before there are any visually sensitive neurons, then it makes sense that their retinotopic organization be guided by chemical markers.

Overt visual function occurs only starting 2-3 weeks postnatally in cats.

Overt visual function can be observed in developing kittens at the same time or before visually responsive neurons can first be found in the deep SC.

Some animals are born with deep-SC neurons responsive to more than one modality.

However, these neurons don't integrate according to Stein's single-neuron definition of multisensory integration. This kind of multisensory integration develops with experience with cross-modal stimuli.

Less is known about the motor properties of SC neurons than about the sensory properties.

Electrical stimulation of the cat SC elicits eye and body movements long before auditory or visual stimuli could have that effect.

These movements already follow the topographic organization of the SC at least roughly.

The map of auditory space in the external nucleus of the inferior colliculus (ICx) is calibrated by visual experience.

Children do not integrate information the same way adults do in some tasks. Specifically, they sometimes do not integrate information optimally, where adults do integrate it optimally.

In an adapted version of Ernst and Banks' visuo-haptic height estimation paradigm, Gori et al. found that children under the age of 8 do not integrate visual and haptic information optimally where adults do.
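
To make the optimality test concrete: the unisensory discrimination thresholds predict the best achievable bimodal threshold under maximum-likelihood integration, and the measured bimodal threshold is compared against that prediction. A minimal sketch; the numbers below are placeholders, not data from Gori et al.:

```python
import numpy as np

def predicted_bimodal_threshold(sigma_v, sigma_h):
    """Optimal (maximum-likelihood) visuo-haptic threshold predicted from the unimodal ones."""
    return np.sqrt((sigma_v ** 2 * sigma_h ** 2) / (sigma_v ** 2 + sigma_h ** 2))

sigma_v, sigma_h = 1.0, 2.0                           # placeholder unimodal thresholds (arbitrary units)
print(predicted_bimodal_threshold(sigma_v, sigma_h))  # always <= min(sigma_v, sigma_h)
```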

Without an intact association cortex (or LIP), SC neurons cannot develop or maintain cross-modal integration.

(Neither multi-sensory enhancement nor depression.)

There is no depression in the immature SC.

Newborn children prefer to look at faces and face-like visual stimuli.

Visual cortex is not fully developed at birth in primates.

The fact that visual cortex is not fully developed at birth, but newborn children prefer face-like visual stimuli to other visual stimuli could be explained by the presence of a subcortical face-detector.

Looking behavior in newborns may be dominated by non-cortical processes.

SC has been implicated as part of a subcortical visual pathway which may drive face detection and orienting towards faces in newborns.

The subcortical visual pathway hypothesized by Johnson, which may drive face detection and orienting towards faces in newborns, also includes the amygdala and the pulvinar.

According to the hypothesis expressed by Johnson, amygdala, pulvinar, and SC together form a sub-cortical pathway which detects faces, initiates orienting movements towards faces, and activates cortical regions.

This implies that this pathway may be important for the development of the `social brain', as Johnson puts it.

Visual processing of potentially affective stimuli seems to be partially innate in primates.

If visual cues were absolutely necessary for the formation of an auditory space map, then no auditory space map should develop without visual cues. Since an auditory space map develops also in blind(ed) animals, visual cues cannot be strictly necessary.

Many localized perceptual events are either only visual or only auditory. It is therefore not plausible that only audio-visual percepts contribute to the formation of an auditory space map.

Visual information plays a role, but does not seem to be necessary for the formation of an auditory space map.

The auditory space maps developed by animals without patterned visual experience seem to be degraded only in some species (in guinea pigs and barn owls, but not in ferrets or cats).

Self-organization may play a role in organizing auditory localization independent of visual input.

Visual input does seem to be necessary to ensure spatial audio-visual map-register.

Audio-visual map registration has its limits: strong distortions of natural perception can only partially be compensated through adaptation.

Register between sensory maps is necessary for proper integration of multi-sensory stimuli.

Visual localization has much greater precision and reliability than auditory localization. This seems to be one reason for vision guiding hearing (in this particular context) and not the other way around.

It is unclear and disputed whether visual dominance in adaptation is hard-wired or a result of the quality of respective stimuli.

Multisensory integration is present in neonates to some degree depending on species (more in precocial than in altricial species), but it is subject to postnatal development and then influenced by experience.

In some instances, developing animals lose perceptual capabilities instead of gaining them due to what is called perceptual narrowing or canalization. One example is human neonates, who are able to discriminate human and monkey faces at first, but only human faces later in development.

Pitti et al. claim that their model explains preference for face-like visual stimuli and that their model can help explain imitation in newborns. According to their model, the SC would develop face detection through somato-visual integration.

Pitti et al.'s claim that their model predicts face detectors in the developing SC and explains the preference for face-like visual stimuli is problematic.

First, it is implausible that the somato-visual map created through multi-sensory learning would map the location at which a child sees facial features to the same neurons that respond to corresponding features in the child's own face. A child does not see a mouth where it sees a ball or hand touching its own mouth.

Second, their paper, at least, does not explain why their model should develop a preference for points that correspond to the child's own facial features. The only reason I can see for this is that their grid model of the child's face has a higher density of nodes around the eyes, nose, and mouth, so that those regions are denser in the resulting map. But this would not explain why the configuration of features corresponding to a face would have higher saliency.