Show Tag: divisive-normalization


Deneve et al. propose a recurrent network that fits a template to (Poisson-)noisy input activity, thereby implementing an estimator of the original input. The authors show analytically and in simulations that the network approximates a maximum-likelihood estimator. The network's dynamics are governed by divisive normalization, and the neural input tuning curves are hard-wired.
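A minimal sketch of this kind of dynamics: lateral pooling, a squaring nonlinearity, then divisive normalization across the population. The specific functional form, the weight widths, and the parameters `s` and `mu` are illustrative assumptions here, not the authors' exact values:

```python
import numpy as np

def recurrent_step(a, w, s=0.1, mu=0.01):
    """One recurrent update (assumed form): pool activity through the
    lateral weights, square it, then divisively normalize."""
    u = w @ a                          # lateral pooling
    u2 = u ** 2                        # squaring nonlinearity
    return u2 / (s + mu * u2.sum())   # divisive normalization

# Ring of 64 units with smooth (Gaussian) lateral weights over
# circular distance between preferred directions.
theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
d = np.abs(theta[:, None] - theta[None, :])
d = np.minimum(d, 2.0 * np.pi - d)
w = np.exp(-d ** 2 / (2 * 0.5 ** 2))

# Poisson-noisy input hill centered on a stimulus at theta = pi.
dist = np.minimum(np.abs(theta - np.pi), 2.0 * np.pi - np.abs(theta - np.pi))
template = np.exp(-dist ** 2 / (2 * 0.3 ** 2))
rng = np.random.default_rng(0)
a = rng.poisson(20 * template).astype(float)

for _ in range(20):
    a = recurrent_step(a, w)

# a has relaxed to a smooth hill; its peak location is the network's
# estimate of the stimulus position (near theta = pi).
```

The smoothing through the weights and the sharpening through the squaring balance out in a stable hill, which is what makes the peak read out like an estimator of the noisy input's underlying position.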

The fact that no long-range inhibitory/short-range excitatory connection pattern was found in Lee's in-vitro study of the rat intermediate SC might also pose a problem for divisive normalization as a modeling assumption for the SC.

Fetsch et al. use divisive normalization to explain the discrepancy between observed neurophysiology (superadditivity) and the normative solution to single-neuron cue integration proposed by Ma et al.:

They propose that network activity is normalized in order to keep neurons' responses within their dynamic range. This would lead to the apparent reliability-dependent weighting of responses found by Morgan et al. and to the superadditivity described by Stanford et al.

Another canonical neural computation proposed by Carandini and Heeger is (divisive) normalization.
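In its canonical form, each neuron's driving input is raised to a power and divided by the pooled (equally transformed) drive of a normalization pool plus a semi-saturation constant: R_i = γ d_i^n / (σ^n + Σ_j d_j^n). A minimal NumPy sketch (parameter values are illustrative):

```python
import numpy as np

def normalize(drive, sigma=1.0, n=2.0, gamma=1.0):
    """Canonical divisive normalization: each unit's exponentiated
    drive is divided by the summed drive of the whole population
    plus a semi-saturation constant sigma."""
    d = np.asarray(drive, dtype=float) ** n
    return gamma * d / (sigma ** n + d.sum())

weak = normalize([1.0, 2.0, 3.0])
strong = normalize([2.0, 4.0, 6.0])  # same pattern, twice the contrast
```

Two hallmarks fall out directly: the summed response is bounded by γ no matter how strong the input, and scaling all inputs together leaves the relative response pattern unchanged while the overall response saturates.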

Divisive normalization models describe neural responses well in cases of

  • olfactory perception in Drosophila,
  • visual processing in retina and V1,
  • possibly in other cortical areas,
  • modulation of responses through attention in visual cortex.

Divisive normalization models describe neural responses well in a number of instances of sensory processing.

Divisive normalization is probably implemented through (GABA-ergic) inhibition in some cases (fruitfly olfactory system). In others (V1), it seems to be implemented by different means.

Divisive normalization models have explained how attention can facilitate or suppress some neurons' responses.

Ohshiro et al.'s ANN model of multisensory integration in the SC is built around divisive normalization.
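A scalar toy version of such a normalization scheme (the functional form and parameters here are illustrative assumptions, not Ohshiro et al.'s actual model) already reproduces inverse effectiveness: near-superadditive combination of weak cues and subadditive combination of strong ones:

```python
def response(drive, alpha=1.0, n=2.0):
    """Single-unit response: exponentiated drive, divisively normalized
    (here only by the unit's own drive, for a scalar illustration)."""
    return drive ** n / (alpha ** n + drive ** n)

def additivity(d, alpha=1.0):
    """Ratio of the bimodal response to the sum of the two unimodal
    responses, for two congruent cues each delivering drive d."""
    uni = response(d, alpha)
    multi = response(2.0 * d, alpha)
    return multi / (2.0 * uni)

weak = additivity(0.1)     # > 1: superadditive (inverse effectiveness)
strong = additivity(10.0)  # < 1: subadditive, response saturates
```

For weak drives the expansive exponent dominates (doubling the drive more than doubles the response), while for strong drives the normalization term dominates and the response saturates, so the same mechanism yields both regimes.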

Yamashita et al. modify Deneve et al.'s network by weakening divisive normalization and lateral inhibition. As a result, their network integrates the localizations when the disparity between the simulated modalities is low, but maintains multiple hills of activation when the disparity is high, thereby accounting for the ventriloquism effect.

Deneve et al.'s model uses divisive normalization.

Yamashita et al. argue that, since whether two stimuli in different modalities at a given disparity are integrated depends on the weight profiles in their network, a Bayesian prior is implicitly encoded in these weights.

Lee et al. found that de-activation of SC motor neurons did not always lead to hypometric saccades. Instead, saccades were generally shifted away from the preferred direction of the de-activated neurons. They took this as support for the vector-averaging hypothesis.