# Show Tag: sensory-maps

Early levels of the auditory pathways are tonotopic.

Sensory maps and their registration across modalities have been demonstrated in mice, cats, monkeys, guinea pigs, hamsters, barn owls, and iguanas.

Maps of sensory space in different sensory modalities can, if brought into register, give rise to an amodal representation of space.

If sensory maps of uni-modal space are brought into register, then cues from different modalities can access shared maps of motor space.

Waves of spontaneous activity in the retina occur before photoreceptors develop.

They are thought to be involved in setting up the spatial organization of the visual pathway.

Competition appears to be a major factor in organizing the visual system.

The "foveation hypothesis" states that the SC elicits saccades which foveate the stimuli activating it, for further examination.

Rucci et al. present an algorithm which performs auditory localization and combines auditory and visual localization in a common SC map. The mapping between the representations is learned using value-dependent learning.

O'Regan and Noë acknowledge that cortical maps containing information about the world exist.

O'Regan and Noë deny that the existence of cortical maps explains the metric quality of visual phenomenology.

Fitting barn owls with prisms which induce a shift in where the owls see objects in their environment leads to a shift of the map of auditory space in the optic tectum.

The shift in the auditory space map in the optic tectum of owls whose visual perception was shifted by prisms is much stronger in juvenile than in mature owls.

Letting adult owls with shifted visual spatial perception hunt mice increases the amount by which the auditory space map in the owls' optic tectum is shifted (as compared to feeding them only dead mice).

Bergan et al. offer four factors which might explain the increase in shift of the auditory space maps in owls with shifted visual spatial perception:

• Hunting represents a task in which accurate map alignment is important (owls which do not hunt presumably do not face such tasks),
• more cross-modal experience (visual and auditory stimuli from the mice),
• cross-modal experiences in phases of increased attention and arousal,
• increased importance of accurate map alignment (important for feeding).

If increased importance of accurate map alignment is what causes stronger map alignment in the optic tectum of owls that hunt than in those of owls that do not hunt (with visually displacing prisms), then that could point either

• to value-based learning in the OT
• or to a role of cognitive input to the OT (hunting owls pay more attention/are more interested in audio-visual stimuli than resting or feeding owls).

General sensory maps (and map register) are already present in the newborn macaque monkey's deep SC (though receptive fields are large).

Maturational state of the deep SC is different between species—particularly between altricial and precocial species.

The topographic map of visual space in the sSC is retinotopic.

The superior colliculus is retinotopically organized.

Multisensory neurons in the deep SC which are close to one another have highly correlated receptive fields.

Rucci et al. model learning of audio-visual map alignment in the barn owl SC. In their model, projections from the retina to the SC are fixed (and visual RFs are therefore static) and connections from ICx are adapted through value-dependent learning.

It is interesting that Rucci et al. modeled map alignment in barn owls using value-based learning long before value-based learning was demonstrated in map alignment in barn owls.
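The flavor of value-dependent learning in such models can be sketched as a reward-modulated Hebbian rule: a Hebbian weight update is gated by a value signal derived from whether the orienting response was accurate. The following is a minimal toy sketch of that idea, not Rucci et al.'s actual model; the network size, noise levels, value magnitudes, and normalization scheme are all assumptions made for illustration.

```python
import numpy as np

def learn_alignment(n_pos=15, n_trials=6000, eta=0.1, seed=0):
    """Toy value-dependent (reward-modulated Hebbian) learning of an
    auditory-to-SC mapping.

    W[i, j]: strength of the projection from auditory input unit j to SC
    unit i. On each trial a stimulus appears at a random position; the SC
    unit responding most strongly 'orients' to its position, and the
    Hebbian update is gated by a value signal: positive if the orienting
    response was accurate (as if it foveated the audio-visual target),
    weakly negative otherwise.
    """
    rng = np.random.default_rng(seed)
    W = rng.uniform(0.0, 1.0, (n_pos, n_pos))
    W /= W.sum(axis=1, keepdims=True)
    for _ in range(n_trials):
        pos = int(rng.integers(n_pos))
        aud = rng.uniform(0.0, 0.05, n_pos)
        aud[pos] += 1.0                         # noisy place-coded auditory input
        sc = W @ aud                            # SC map activation
        choice = int(np.argmax(sc))             # orienting response
        value = 1.0 if choice == pos else -0.1  # visually derived value signal
        W[choice] += eta * value * aud          # value-gated Hebbian update
        W[choice] = np.clip(W[choice], 0.0, None)
        W[choice] /= W[choice].sum() + 1e-9     # synaptic normalization
    return W
```

Only the SC unit whose position matches the stimulus ever receives positive value for winning, so the weight matrix drifts toward an aligned (diagonal) mapping: after training, auditory position p drives SC unit p most strongly.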

Occluding one ear early in life shifts the map of auditory space with respect to the map of visual space in barn owls. Prolonged occlusion of one ear early in life leads to a permanent realignment of the auditory map with the visual map.

fAES is not tonotopic. Instead, its neurons are responsive to spatial features of sounds. No spatial map has been found in fAES (until at least 2004).

There's a topographic map of somatosensory space in the putamen.

Electrostimulation of putamen neurons can evoke body movement consistent with the map of somatosensory space in that brain region.

Graziano and Gross found visuo-somatosensory neurons in those regions of the putamen which code for arms and the face in somatosensory space.

Visuo-somatosensory neurons in the putamen with somatosensory RFs in the face are very selective: They seem to respond to visual stimuli consistent with an upcoming somatosensory stimulus (nearby objects approaching the somatosensory RFs of the neurons).

Graziano and Gross report on visuo-somatosensory cells in the putamen in which remapping seems to be happening: Those cells responded to visual stimuli only when the animal could see the arm in which the somatosensory RF of those cells was located.

In some SC neurons, receptive fields are not in spatial register across modalities.

Receptive fields of SC neurons in different modalities tend to overlap.

AEV is partially, but not consistently, retinotopic.

SIV is somatotopically organized.

Like many other auditory brain regions, the IC is tonotopically organized, except for ICx.

Jeffress' model predicts a spatial map of ITDs in the MSO. Recent evidence seems to suggest that this map indeed exists.
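The core of Jeffress' scheme, converting an interaural time difference (ITD) into a place code, can be sketched with a bank of coincidence detectors fed through internal delay lines; the detector whose internal delay cancels the external ITD responds most strongly. This is an illustrative toy implementation (sampling rate, tone frequency, and delay grid are arbitrary choices, not taken from any particular study):

```python
import numpy as np

def jeffress_estimate(itd, fs=100_000, freq=500.0, dur=0.02,
                      delays=np.linspace(-5e-4, 5e-4, 41)):
    """Estimate an interaural time difference with a bank of
    Jeffress-style coincidence detectors.

    Each detector receives the left-ear signal through an internal delay
    line of length d and the right-ear signal directly; the detector
    whose internal delay compensates the external ITD sees coincident
    inputs and responds most, so the ITD is read out as a *place* in the
    detector array (a spatial map of ITDs).
    """
    t = np.arange(0.0, dur, 1.0 / fs)
    left = np.sin(2 * np.pi * freq * t)            # sound at the left ear
    right = np.sin(2 * np.pi * freq * (t - itd))   # same sound, ITD later
    responses = []
    for d in delays:
        shift = int(round(d * fs))
        delayed_left = np.roll(left, shift)        # internal delay line
        responses.append(np.mean(delayed_left * right))  # coincidence strength
    return delays[int(np.argmax(responses))]
```

For a pure tone this reduces to picking the peak of the interaural cross-correlation, which is why phase ambiguity appears once the ITD range exceeds the tone's period.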

Rearing barn owls in darkness results in mis-alignment of auditory and visual receptive fields in the owls' optic tectum.

Rearing barn owls in darkness results in discontinuities in the map of auditory space of the owls' optic tectum.

Rearing animals in darkness can result in anomalous auditory maps in their superior colliculi.

There is a map of auditory space in the deep superior colliculus.

The visual and auditory maps in the deep SC are in spatial register.

The map of visual space in the superficial SC of the mouse is in rough topographic register with the map formed by the tactile receptive fields of whiskers (and other body hairs) in deeper layers.

The receptive fields of certain neurons in the cat's deep SC shift when the eye position is changed. Thus, the map of auditory space in the deep SC is temporarily realigned to stay in register with the retinotopic map.

Primary somatosensory cortex is somatotopic.

Wilson and Bednar distinguish between topological feature maps and topographic maps. The topology of topographic maps tends to correspond to the spatial properties of sensory surfaces (like the retina or the skin) whereas topological feature maps correspond to the similarity of higher-order features of sensory input (like spatial frequency or orientation in vision).

Wilson and Bednar discuss the usefulness of topological feature maps, implying that they may not be useful at all but merely a byproduct of neural development and adaptation processes.

According to Wilson and Bednar, there are four main families of theories concerning topological feature maps:

• input-driven self-organization,
• minimal-wire length,
• place-coding theory,
• Turing pattern formation.

Wilson and Bednar argue that input-driven self-organization and Turing pattern formation explain how topological maps may arise from useful processes, but they do not explain why topological maps are useful in themselves.

According to Wilson and Bednar, wire-length optimization presupposes that neurons need input from other neurons with similar feature selectivity. Under that assumption, wire length is minimized if neurons with similar selectivities are close to each other. Thus, continuous topological feature maps of the kind we see optimize wire length.

The idea that neurons should especially require input from other neurons with similar spatial receptive fields is unproblematic. However, Wilson and Bednar argue that it is unclear why neurons should especially require input from neurons with similar non-spatial feature preferences (like orientation, spatial frequency, smell, etc.).

Koulakov and Chklovskii assume that sensory neurons in cortex preferentially connect to other neurons whose feature-preferences do not differ more than a certain amount from their own feature-preferences. Further, they argue that long connections between neurons incur a metabolic cost. From this, they derive the hypothesis that the patterns of feature selectivity seen in neural populations are the result of minimizing the distance between similarly selective neurons.

Koulakov and Chklovskii show that various selectivity patterns emerge from their theorized cost minimization, given different parameterizations of preference for connections to similarly-tuned neurons.

Pooling the activity of a set of similarly-tuned neurons is useful for increasing the sharpness of tuning. A neuron which pools from a set of similarly-tuned neurons would have to make shorter connections if these neurons are close together. Thus, there is a reason why it can be useful to connect preferentially to a set of similarly-tuned neurons. This reason might be part of the reason behind topographic maps.

The uni-sensory, multi-sensory and motor maps of the superior colliculus are in spatial register.

The shift in the auditory map in ICx is accompanied by changed projections from ICc to ICx.

There appears to be plasticity with respect to the auditory space map in the SC.

The superficial SC is visuotopic.

The part of the visual map in the superficial SC corresponding to the center of the visual field has the highest spatial resolution.

Visual receptive fields in the deeper SC are larger than in the superficial SC.

Peripheral visual space is represented better in the sensory map of the deeper SC than in the visual map of the superficial SC.

Do the parts of the sensory map in the deeper SC corresponding to peripheral visual space have better representation than in the superficial SC because they integrate more information? Does auditory or tactile localization play a more important part in multisensory localization there?

Moving the eyes shifts the auditory and somatosensory maps in the SC.

The basic topography of retinotectal projections is set up by chemical markers. This topography is coarse and is refined through activity-dependent development.

We do not know whether sensory maps other than the visual map in the SC are initially set up through chemical markers, but it is likely.

If deep SC neurons are sensitive to tactile stimuli before there are any visually sensitive neurons, then it makes sense that their retinotopic organization be guided by chemical markers.

There's a retinotopic, polysynaptic pathway from the SC through the LGN.

The external nucleus of the inferior colliculus (ICx) of the barn owl represents a map of auditory space.

The map of auditory space in the external nucleus of the inferior colliculus (ICx) is calibrated by visual experience.

The optic tectum (OT) receives information on sound source localization from ICx.

Hyde and Knudsen found that there is a point-to-point projection from OT to IC.

A faithful model of the SC should probably adapt the mapping of auditory space in the SC and in another model representing ICx.

Mammals seem to have SC-IC connectivity analogous to that of the barn owl.

Hyde and Knudsen propose that the OT-IC projection conveys what they call a "template-based instructive signal" which aligns the auditory space map in ICx with the retinotopic space map in SC.

Pavlou and Casey model the SC.

They use Hebbian, competitive learning to learn a topographic mapping between modalities.

They also simulate cortical input.

There is topographic mapping even in the olfactory system.

Topographic mapping is pervasive throughout sensory-motor processing.

Some sensory-motor maps are complex: they are not a simple spatiotopic mapping, but comprise internally spatiotopic neighborhoods which, on a much greater scale, are organized spatiotopically, but across which the same point in space may be represented redundantly.

The complex structure of sensory-motor maps may be due to a mapping from a high-dimensional manifold into a two-dimensional space. This kind of map would also occur in Ring's motmaps.

Audio-visual map registration has its limits: strong distortions of natural perception can only partially be compensated through adaptation.

Register between sensory maps is necessary for proper integration of multi-sensory stimuli.

Visual localization has much greater precision and reliability than auditory localization. This seems to be one reason for vision guiding hearing (in this particular context) and not the other way around.

It is unclear and disputed whether visual dominance in adaptation is hard-wired or a result of the quality of respective stimuli.

Map alignment in the SC is expensive, but it pays off because it allows for a single interface between sensory processing and motor output generation.

LIP is retinotopic and involved in gaze shifts.

The medial intraparietal area (MIP) is retinotopic and involved in reaching.

Sensory re-mapping is often incomplete.

All visual areas from V1 to V2 and MT are retinotopic.

LIP has been suggested to contain a saliency map of the visual field, to guide visual attention, and to decide about saccades.

LGN is retinotopically organized.

The model by Cuppini et al. develops low-level multisensory integration (the spatial principle) such that integration happens only with higher-level input.

In their model, Hebbian learning leads to sharpening of receptive fields, overlap of receptive fields, and integration through higher-cognitive input.

Pitti et al. use a Hebbian learning algorithm to learn somato-visual register.

Hebbian learning and in particular SOM-like algorithms have been used to model cross-sensory spatial register (e.g. in the SC).

Bauer et al. present a SOM variant which learns the variance of different sensory modalities (assuming Gaussian noise) to model multi-sensory integration in the SC.

Bauer and Wermter use the algorithm they proposed to model multi-sensory integration in the SC. They show that it can learn to near-optimally integrate noisy multi-sensory information and reproduces spatial register of sensory maps, the spatial principle, the principle of inverse effectiveness, and near-optimal audio-visual integration in object localization.
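The basic mechanism by which an SOM produces cross-sensory register can be shown with a generic 1-D SOM (this is the textbook algorithm, not Bauer et al.'s variance-learning variant): each training input pairs a precise visual cue and a coarse auditory cue of the same event position, so each map unit's visual and auditory weights converge to the same position. The noise levels and annealing schedule below are arbitrary choices for illustration.

```python
import numpy as np

def train_som(n_units=20, n_steps=4000, seed=0):
    """Minimal 1-D self-organizing map trained on paired audio-visual
    position cues of a single underlying event location in [0, 1].

    Because the two cues are correlated (they stem from the same event),
    each unit's visual and auditory weights converge to the same
    position: the learned visual and auditory maps end up in spatial
    register, and neighboring units code neighboring positions.
    """
    rng = np.random.default_rng(seed)
    W = rng.uniform(0.0, 1.0, (n_units, 2))  # column 0: visual, 1: auditory
    for step in range(n_steps):
        pos = rng.uniform()                         # true event position
        x = np.array([pos + rng.normal(0.0, 0.02),  # precise visual cue
                      pos + rng.normal(0.0, 0.10)]) # coarse auditory cue
        winner = int(np.argmin(((W - x) ** 2).sum(axis=1)))
        frac = 1.0 - step / n_steps                 # annealing schedule
        lr = 0.5 * frac + 0.01
        sigma = (n_units / 2) * frac + 0.5          # shrinking neighborhood
        h = np.exp(-0.5 * ((np.arange(n_units) - winner) / sigma) ** 2)
        W += lr * h[:, None] * (x - W)              # neighborhood update
    return W
```

After training, the visual weight column varies monotonically along the map (a topographic map) and closely matches the auditory column unit by unit (cross-sensory register), illustrating why this algorithm family is a natural fit for modeling map register in the SC.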

Pitti et al.'s claim of predicting with their model face-detectors in the developing SC and explaining preference for visual stimuli is problematic.

First, it is implausible that the somato-visual map created through multi-sensory learning would map the location at which a child sees facial features to the same neurons that respond to corresponding features in the child's own face. A child does not see a mouth where it sees a ball or hand touching its own mouth.

Secondly, their paper at least does not explain why their model should develop a preference for points that correspond to the child's own facial features. The only reason I see that this could happen is that their grid model of the child's face has a higher density of nodes around eyes, nose, and mouth, such that those regions are denser in the resulting map. But this would not explain why the configuration of features corresponding to a face would have higher saliency.