Show Tag: time

According to Hartmann,

``A model is called dynamic, if it... includes assumptions about the time-evolution of the system. ... Simulations are closely related to dynamic models. More concretely, a simulation results when the equations of the underlying dynamic model are solved. This model is designed to imitate the time evolution of a real system. To put it another way, a simulation imitates one process by another process. In this definition, the term `process' refers solely to some object or system whose state changes in time. If the simulation is run on a computer, it is called a computer simulation.''

According to Humphreys, Hartmann's definition of a simulation needs revision, but is basically correct.

In Humphreys' view, simulations need not include evolution over time.

Some sensory modalities yield low-latency but unreliable information, while others yield high-latency but reliable information.

Combining both can produce an estimate that is available quickly and improves over time as the more reliable information arrives.
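This combination can be sketched with inverse-variance (precision-weighted) fusion: a fast but noisy reading gives an immediate estimate, and a later, more precise reading sharpens it. All numbers (the readings and their standard deviations) are hypothetical illustrations, not values from any of the cited studies.

```python
def fuse(estimates, sigmas):
    """Inverse-variance (precision-weighted) fusion of independent estimates."""
    weights = [1.0 / s ** 2 for s in sigmas]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# Hypothetical numbers: true position = 10.
fast_reading = 13.0  # fast, noisy modality (sigma = 4.0); available immediately
early_estimate = fuse([fast_reading], [4.0])

slow_reading = 10.2  # slow, precise modality (sigma = 0.5); arrives later
late_estimate = fuse([fast_reading, slow_reading], [4.0, 0.5])

print(early_estimate)           # 13.0 (error 3.0)
print(round(late_estimate, 2))  # 10.24 (error ~0.24)
```

The early estimate is usable at once; once the reliable reading arrives, its much higher precision dominates the weighted sum, so the fused estimate improves.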

Robinson reports two types of motor neurons in the deep SC: one type shows strong activity shortly (~20 milliseconds) before the onset of a saccade; the other shows gradually increasing activity that peaks, again, around 12-20 milliseconds before onset.

The response of neurons in the SC to a given stimulus decreases if that stimulus is presented constantly or repeatedly at a relatively slow rate (once every few seconds, up to a minute).

Some neurons in the dSC respond to an auditory stimulus with a single spike at its onset, some with sustained activity over the duration of the stimulus.

Neurons in the deep SC whose responses are enhanced by multisensory stimuli also reach their response peak earlier.

The response profiles have superadditive, additive, and subadditive phases: even for cross-sensory stimuli whose unisensory components are strong enough that the cumulative response is enhanced only additively, the response is superadditive during parts of its time course.

The probability that two stimuli in different modalities are perceived as one multisensory stimulus generally decreases with increasing temporal or spatial disparity between them.
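A toy model of this relationship: let the probability of fusing two cross-modal stimuli fall off as a Gaussian in both temporal and spatial disparity. The Gaussian form and the window widths (`sigma_t`, `sigma_x`) are illustrative assumptions, not parameters from the cited literature.

```python
import math

def fusion_probability(dt_ms, dx_deg, sigma_t=100.0, sigma_x=10.0):
    """Toy model: probability that two cross-modal stimuli separated by
    dt_ms (temporal disparity) and dx_deg (spatial disparity) are
    perceived as one multisensory event. Window widths are assumptions."""
    return math.exp(-0.5 * ((dt_ms / sigma_t) ** 2 + (dx_deg / sigma_x) ** 2))

print(fusion_probability(0, 0))      # 1.0: coincident stimuli are fused
print(fusion_probability(50, 5))     # ~0.78: small disparity, likely fused
print(fusion_probability(300, 30))   # ~0.0001: large disparity, not fused
```

Any function that decreases monotonically in both disparities would capture the qualitative claim; the Gaussian is just a convenient choice.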

In a sensorimotor synchronization task, Aschersleben and Bertelson found that an auditory distractor biased the temporal perception of a visual target stimulus more strongly than the other way around.

Kushal et al. do not evaluate the accuracy of audio-visual localization quantitatively. They do show a graph for visual-only, audio-visual, and audio-visual-and-temporal localization during one test run. That graph seems to indicate that multisensory and temporal integration prevent misdetections; they do not seem to improve localization accuracy much.

Kushal et al. use an EM algorithm to integrate audio-visual information for active speaker localization statically and over time.
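The general idea can be illustrated with a minimal EM sketch (not Kushal et al.'s actual model): pool azimuth readings from both modalities and fit a two-component mixture in which each reading comes either from the speaker (Gaussian at an unknown location) or from background clutter (uniform). All data values below are invented for illustration.

```python
import numpy as np

def em_localize(observations, n_iter=50, clutter_range=180.0):
    """Minimal EM sketch: each azimuth reading (degrees) is either from the
    speaker (Gaussian at mu) or from clutter (uniform over clutter_range)."""
    obs = np.asarray(observations, dtype=float)
    mu, sigma, prior = obs.mean(), obs.std() + 1.0, 0.5
    for _ in range(n_iter):
        # E-step: responsibility that each reading came from the speaker.
        speaker = prior * np.exp(-0.5 * ((obs - mu) / sigma) ** 2) \
            / (sigma * np.sqrt(2 * np.pi))
        clutter = (1.0 - prior) / clutter_range
        r = speaker / (speaker + clutter)
        # M-step: update speaker location, spread, and mixing weight.
        mu = np.sum(r * obs) / np.sum(r)
        sigma = np.sqrt(np.sum(r * (obs - mu) ** 2) / np.sum(r)) + 1e-6
        prior = r.mean()
    return float(mu)

# Invented readings: visual estimates cluster tightly near 20 degrees,
# audio estimates are noisier and include one outlier at -70 degrees.
audio = [18.0, 23.0, 21.0, -70.0]
visual = [19.5, 20.5, 20.0]
print(em_localize(audio + visual))  # roughly 20: the outlier is discounted
```

A naive average of all seven readings would be pulled to about 7 degrees by the outlier; EM assigns the outlier to the clutter component and localizes near the consistent cluster.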

It took Krizhevsky et al. five to six days to train their network on top-notch hardware available in 2012.

RNNPB learns sequences of inputs in an unsupervised (self-organized) fashion.

Rowland and Stein focus on the temporal dynamics of multisensory integration.

Rowland and Stein's goal is only to generate neural responses like those observed in real SC neurons under realistic biological constraints. The model does not offer any explanation of neural responses at the functional level.

The network characteristics of the SC are modeled only very roughly by Rowland and Stein's model.

The model due to Rowland and Stein manages to reproduce the nonlinear time course of neural responses in multisensory integration in the SC, as well as the enhancement in magnitude and inverse effectiveness.

Since the model does not include spatial properties, it does not reproduce the spatial principle (i.e., no depression).

Multisensory stimuli can be integrated within a certain time window; auditory or somatosensory stimuli can thus be integrated with visual stimuli even though they arrive delayed with respect to the visual stimuli.