Cuppini et al. present a model of the SC that exhibits many of the properties of neural connectivity, electrophysiology, and development that have been found experimentally.

The model of the SC due to Cuppini et al. reproduces development of

- multi-sensory neurons
- multi-sensory enhancement
- intra-modality depression
- super-additivity
- inverse effectiveness

The model due to Cuppini et al. comprises distinct neural populations for

- anterior ectosylvian sulcus (AES) and auditory subregion of AES (FAES)
- inhibitory interneurons between AES/FAES and SC
- space-coded ascending inputs (visual, auditory) to the SC
- inhibitory ascending interneurons
- (potentially) multi-sensory SC neurons.

The model due to Cuppini et al. does not need neural multiplication to implement superadditivity or inverse effectiveness. Instead, it exploits the sigmoid transfer function of multi-sensory neurons: because of this sigmoid transfer function, and because of less-than-unit weights between input and multi-sensory neurons, weak stimuli that fall into the lower, sub-linear region of the sigmoid evoke less-than-linear responses in multi-sensory neurons. However, the sum of two such stimuli (from different modalities) can fall into the linear range, so the combined response can be much greater than the sum of the individual responses.
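This mechanism can be sketched numerically. The sigmoid parameters and weights below are illustrative values I made up, not parameters from Cuppini et al.:

```python
import numpy as np

def sigmoid(x, theta=5.0):
    # generic sigmoid squashing function; theta (the inflection point)
    # is an illustrative parameter, not a value from the model
    return 1.0 / (1.0 + np.exp(-(x - theta)))

w = 0.6  # less-than-unit weight from input to multi-sensory neuron

def enhancement(stim):
    # ratio of the multi-sensory response to the sum of the two
    # unisensory responses, for two equally strong stimuli
    return sigmoid(2 * w * stim) / (2 * sigmoid(w * stim))

print(enhancement(3.0) > 1)                  # superadditivity for weak stimuli
print(enhancement(3.0) > enhancement(7.0))   # inverse effectiveness
```

Both effects fall out of the sigmoid alone; no multiplicative interaction is needed.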

Through lateral connections, a Hebbian learning rule, and approximate initialization, Cuppini et al. manage to learn registration between sensory maps. This can be seen as an implementation of a SOM.

Cuppini et al. use mutually inhibitory, modality-specific inhibition (inhibitory interneurons that receive input from one modality and inhibit inhibitory interneurons receiving input from other modalities) to implement a winner-take-all mechanism between modalities; this leads to a visual (or auditory) capture effect without functional multi-sensory integration.

Their network model builds upon their earlier single-neuron model.

Not sure about the biological motivation of this. Also: it would be interesting to know if functional integration still occurs.

Cuppini et al. do not evaluate their model's performance (comparability to cat/human performance, optimality...)

The model due to Cuppini et al. is inspired only by observed neurophysiology; it has no normative inspiration.

Ravulakollu et al. loosely use the superior colliculus as a metaphor for their robotic visual-auditory localization.

Soltani and Wang propose an adaptive neural model of Bayesian inference neglecting any priors and claim that it is consistent with certain observations in biology.

Soltani and Wang argue that their model is consistent with the 'base rate neglect' fallacy.

Soltani and Wang propose an adaptive model of Bayesian inference **with binary cues**.

In their model, a synaptic weight codes for the **ratio** of synapses in a **set** which are activated vs. de-activated by the binary cue encoded in their pre-synaptic axon's activity.

The stochastic Hebbian learning rule makes the synaptic weights correctly encode log posterior probabilities, so that the neurons correctly encode reward probability.

Weisswange et al.'s model uses a softmax function to normalize the output.

Self-organization occurs in the physical world as well as in information-processing systems. In neural-network-like systems, SOMs are not the only way of self-organization.

Rucci et al.'s plots of ICc activation look very similar to Jorge's IPD matrices.

Deneve et al. propose a recurrent network which is able to fit a template to (Poisson-)noisy input activity, implementing an estimator of the original input. The authors show analytically and in simulations that the network is able to approximate a maximum likelihood estimator. The network's dynamics are governed by divisive normalization and the neural input tuning curves are hard-wired.

Soltani and Wang propose a learning algorithm in which neurons predict rewards for actions based on individual cues.
The *winning neuron* stochastically gets reward depending on the action taken.

Soltani and Wang only consider percepts and reward. They do not model any generative causes behind the two.

In Chalk et al.'s model, low-level sensory neurons are responsible for calculating the probabilities of high-level hidden variables given certain features being present or not. Other neurons are then responsible for predicting the rewards of different actions depending on the presumed state of those hidden variables.

In Chalk et al.'s model, neurons update their parameters online, i.e. during the task. In one condition of their experiments, only neurons predicting reward are updated; in others, perceptual neurons are updated as well. Reward prediction was better when perceptual responses were tuned as well.

SOMs and SOM-like algorithms have been used to model natural multi-sensory integration in the SC.

Anastasio and Patton model the deep SC using SOM learning.

Anastasio and Patton present a model of multi-sensory integration in the superior colliculus which takes into account modulation by uni-sensory projections from cortical areas.

In the model due to Anastasio and Patton, deep SC neurons combine cortical input multiplicatively with primary input.

Anastasio and Patton's model is trained in two steps:

First, connections from primary input to deep SC neurons are adapted in a SOM-like fashion.

Then, connections from uni-sensory, parietal inputs are trained, following an anti-Hebbian regime.

The latter phase ensures the principles of *modality-matching* and *cross-modality*.

Magosso et al. present a recurrent ANN model which replicates the ventriloquism effect and the ventriloquism aftereffect.

Competitive learning can be implemented in ANNs by strong, constant inhibitory connections between competing neurons.

Simple competitive neural learning with constant inhibitory connections between competing neurons leads to grandmother-type cells.

A network with Hebbian and anti-Hebbian learning can produce a sparse code. Excitatory connections from input to output are learned Hebbian, while inhibition between output neurons is learned anti-Hebbian.

Representing an object by only one neuron (a 'grandmother cell') makes subsequent processing very easy.

Beck et al. model build-up in the SC as accumulation of evidence from sensory input.

Cuijpers and Erlhagen use neural fields to implement Bayes' rule for combining the activities of neural populations spatially encoding probability distributions.

Beck et al. argue that simply adding time point-to-time point responses of a population code will integrate the information optimally if the noise in the input is what they call *"Poisson-like"*.

That is somewhat expected as in a Poisson distribution with mean $\lambda$ the variance is $\lambda$ and the standard deviation is $\sqrt{\lambda}$ and adding population responses is equivalent to counting spikes over a longer period of time, thus increasing the mean of the distribution.
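A quick numerical check of this reasoning (the rate and sample size below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 4.0                      # mean spike count per time window

# spike counts of one Poisson neuron in two successive windows
a = rng.poisson(lam, size=100_000)
b = rng.poisson(lam, size=100_000)
s = a + b                      # adding responses = counting over a longer window

# the sum is again Poisson, with mean (and hence variance) 2*lam,
# so the signal-to-noise ratio mean/std = sqrt(mean) grows
print(s.mean(), s.var())       # both close to 8
```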

Many models of Bayesian integration of neural responses rely on hand-crafted connectivity.

The model proposed by Heinrich et al. builds upon the one by Hinoshita et al. It adds visual input and thus shows how learning of language may not only be grounded in perception of verbal utterances, but also in visual perception.

Hinoshita et al. propose a model of natural language acquisition based on a multiple-timescale recurrent artificial neural network (MTRNN).

Lawrence et al. train different kinds of recurrent neural networks to classify sentences as grammatical or ungrammatical.

Lawrence et al. manage to train ANNs to learn grammar-like structure without them having any inbuilt representation of grammar. They argue that this shows that Chomsky's assumption that humans must have inborn linguistic capabilities is unnecessary.

Hinoshita et al. argue that by watching language learning in RNNs, we can learn about how the human brain might self-organize to learn language.

Single-layer perceptrons cannot approximate every continuous function.

Multilayer perceptrons can approximate any continuous function with only a single hidden layer.

It was known before Hornik et al.'s work that **specific classes** of multilayer feedforward networks could approximate any continuous function.

Hornik et al. showed that multilayer feed-forward networks with **arbitrary** squashing functions can approximate any continuous function with only a single hidden layer to any desired accuracy (on a compact set of input patterns).

If an MLP fails to approximate a certain function, this can be due to

- inadequate learning procedure,
- inadequate number of hidden units (not layers),
- noise.

In principle, a three-layer feedforward network should be capable of approximating any (continuous) function.
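As a sketch of the positive claim: a single hidden layer of squashing units suffices to fit a smooth function when only the linear output layer is trained. All sizes and weight scales below are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(1)

# target: a continuous function on a compact set
x = np.linspace(-np.pi, np.pi, 200)
y = np.sin(x)

# one hidden layer of tanh units with fixed random weights
n_hidden = 50
W = rng.normal(0.0, 2.0, n_hidden)
b = rng.uniform(-np.pi, np.pi, n_hidden)
H = np.tanh(np.outer(x, W) + b)          # hidden activations, shape (200, 50)

# fit only the linear output layer by least squares
coef, *_ = np.linalg.lstsq(H, y, rcond=None)
max_err = np.max(np.abs(H @ coef - y))
print(max_err)                            # small approximation error
```

Note that this only demonstrates feasibility on one function; Hornik et al.'s result is the general statement.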

Ursino et al. divide models of multisensory integration into three categories:

- Bayesian models (optimal integration etc.),
- neuron and network models,
- models on the semantic level (symbolic models).

Anastasio drops the strong probabilistic interpretation of SC neurons' firing patterns in their learning model.

The first SC model presented by Rowland et al. is a single-neuron model in which sensory and cortical input is simply summed and passed through a sigmoid squashing function.

The sigmoid squashing function used in Rowland et al.'s first model leads to inverse effectiveness: the sum of weak inputs generally falls into the supra-linear part of the sigmoid and thus produces a superadditive response.

The SC model presented by Cuppini et al. has a circular topology to prevent the border effect.

The model of biological computation of ITDs proposed by Jeffress extracts ITDs by means of delay lines and coincidence detecting neurons:

The peaks of the sound pressure at each ear lead, via a semi-mechanical process, to peaks in the activity of certain auditory nerve fibers. Those fibers connect to coincidence-detecting neurons. Different delays in connections from the two ears lead to coincidence for different ITDs, thus making these coincidence-detecting neurons selective for different angles to the sound source.
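The delay-line idea can be sketched as a bank of coincidence detectors, each pairing the left signal under a different internal delay with the right signal (effectively a cross-correlation; all numbers below are made up):

```python
import numpy as np

rng = np.random.default_rng(2)

itd = 8                                  # true interaural delay, in samples
source = rng.normal(size=2000)           # broadband sound source
left = source
right = np.roll(source, itd)             # the sound reaches the right ear later

# one coincidence detector per candidate internal delay: each one
# responds to the product of the delayed left and the right signal
delays = np.arange(-20, 21)
responses = [np.sum(np.roll(left, d) * right) for d in delays]

# the detector whose internal delay compensates the ITD responds most,
# making it selective for the corresponding direction to the source
best = int(delays[np.argmax(responses)])
print(best)   # equals the true ITD
```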

Liu et al.'s model of the IC includes a Jeffress-type model of the MSO.

The model of natural multisensory integration and localization is based on the leaky integrate-and-fire neuron model.

Rucci et al. explain audio-visual map registration and learning of orienting responses to audio-visual stimuli by what they call value-dependent learning: after each motor response, a modulatory system evaluates whether that response was good (bringing the target into the center of the system's visual field) or bad. The learning rule strengthens connections between neurons from the network's different subpopulations if they are highly correlated whenever the modulatory response is strong, and weakens them otherwise.

Rucci et al.'s system comprises artificial neural populations modeling the MSO (aka the nucleus laminaris), the central nucleus of the inferior colliculus (ICc), the external nucleus of the inferior colliculus (ICx), the retina, and the superior colliculus (SC, aka the optic tectum). The population modeling the SC is split into a sensory and a motor subpopulation.

In Rucci et al.'s system, the MSO is modeled by computing Fourier transforms of each of the auditory signals. The activity of the MSO neurons is then determined by their individual preferred frequency and ITD and computed directly from the Fourier-transformed data.

In Rucci et al.'s model, neural weights are updated between neural populations modeling

- ICc and ICx
- sensory and motor SC.

Recent neurophysiological evidence seems to contradict the details of Jeffress' model.

Weber presents a Helmholtz machine extended by adaptive lateral connections between units and a topological interpretation of the network. A Gaussian prior over the population response (a prior favoring co-activation of close-by units) and training with natural images lead to spatial self-organization and feature selectivity similar to that of cells in early visual cortex.

Weber presents a continuous Hopfield-like RNN as a model of complex cells in V1. This model receives input from a sparse-coding generative Helmholtz machine, described earlier as a model of simple cells in V1, which produces topography by coactivating neighbors in its "sleep phase". The complex cell model with its horizontal connections is trained to predict the simple cells' activations while input images undergo small random shifts. The trained network features realistic centre-surround weight profiles (in position and orientation space) and sharpened orientation tuning curves.

Hunsberger et al. suggest that neural heterogeneity and response stochasticity both decorrelate and linearize population responses and thus improve transmission of information.

Krasne et al. present an ANN model for fear conditioning.

Pure neural modeling does not explain complex behavior.

Much of neural processing can be understood as compression and de-compression.

Verschure says neurons don't seem to multiply. Gabbiani et al. say they might.

Ma, Beck, Latham and Pouget argue that optimal integration of population-coded probabilistic information can be achieved by simply adding the activities of neurons with identical receptive fields. The preconditions for this to hold are

- independent Poisson (or other "Poisson-like") noise in the input
- identically-shaped tuning curves in input neurons
- a point-to-point connection from neurons in different populations with identical receptive fields to the same output neuron.
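Under these preconditions, the claim can be checked directly: for independent Poisson noise, the log likelihood is linear in the spike counts, so decoding the point-wise sum of the two populations (treated as one population with doubled gain) gives the same likelihood shape as multiplying the two individual likelihoods. All tuning parameters below are made-up values:

```python
import numpy as np

rng = np.random.default_rng(3)

s_grid = np.linspace(-10, 10, 201)     # candidate stimulus locations
prefs = np.linspace(-10, 10, 21)       # preferred locations of the neurons

def tuning(s, gain=1.0):
    # identical Gaussian tuning curves (gain/width are made-up values)
    return gain * (10 * np.exp(-0.5 * ((prefs - s) / 2.0) ** 2) + 0.1)

true_s = 2.0
n_vis = rng.poisson(tuning(true_s))    # visual population response
n_aud = rng.poisson(tuning(true_s))    # auditory population response

def loglik(n, gain=1.0):
    # Poisson log likelihood over the stimulus grid (constants dropped)
    f = np.array([tuning(s, gain) for s in s_grid])   # shape (201, 21)
    return n @ np.log(f).T - f.sum(axis=1)

joint = loglik(n_vis) + loglik(n_aud)        # product of the two likelihoods
summed = loglik(n_vis + n_aud, gain=2.0)     # decode the point-wise sum

# identical shapes: the two curves differ only by a constant offset
print(np.allclose(summed - summed.max(), joint - joint.max()))
```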

It's hard to unambiguously interpret Ma et al.'s paper, but it seems that, according to Renart and van Rossum, any other non-flat profile would also transmit the information optimally, although the decoding scheme would maybe have to be different.

Renart and van Rossum discuss optimal connection weight profiles between layers in a feed-forward neural network. They come to the conclusion that, if neurons in the input population have broad tuning curves, then Mexican-hat-like connectivity profiles are optimal.

Renart and van Rossum state that any non-flat connectivity profile between input and output layers in a feed-forward network yields optimal transmission if there is no noise in the output.

The model due to Ma et al. is simple and it requires no learning.

Mixing Hebbian (unsupervised) learning with feedback can guide the unsupervised learning process in learning *interesting*, or task-relevant, things.

Weisswange et al. model learning of multisensory integration using reward-mediated / reward-dependent learning in an ANN, a form of reinforcement learning.

They model a situation similar to the experiments due to Neil et al. and Körding et al. in which a learner is presented with visual, auditory, or audio-visual stimuli.

In each trial, the learner is given reward depending on the accuracy of its response.

In an experiment where stimuli could be caused by the same or different sources, Weisswange et al. found that their model behaves similarly to both model averaging and model selection, though slightly more similarly to the former.

Fujita presents a supervised ANN model for learning either to generate a continuous time series from an input signal, or to generate a continuous function of the integral of a time series.

Fujita models saccade suppression of endpoint variability by the cerebellum using their supervised ANN model for learning a continuous function of the integral of an input time series.

He assumes that the input activity originates from the SC and that the correction signal is supplied by sensory feedback.

De Kamps and van der Velde introduce a neural blackboard architecture for representing sentence structure.

Deco and Rolls introduce a system that uses a trace learning rule to learn recognition of more and more complex visual features in successive layers of a neural architecture. In each layer, the specificity of the features increases together with the receptive fields of neurons until the receptive fields span most of the visual range and the features actually code for objects. This model is thus a model of the development of object-based attention.

The leaky-integrate-and-fire model due to Rowland and Stein models a single multisensory SC neuron receiving input from a number of sensory, cortical, and sub-cortical sources.

Each of the sources is modeled as a single input to the SC neuron.

Local inhibitory interaction between neurons in multi-sensory trials is modeled by a single time-variant subtractive term which sets in shortly after the actual sensory input, thus not influencing the first phase of the response after stimulus onset.

The network characteristics of the SC are modeled only very roughly by Rowland and Stein's model.

SOMs learn latent-variable models.

If we know which kind of output we want to have and if each neuron's output is a smooth function of its input, then the change in weights to get the right output from the input can be computed using calculus.

Following this strategy, we get backpropagation.
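A minimal sketch of this chain-rule computation for a tiny two-layer sigmoid network, with the analytic gradient checked against a finite difference (all sizes and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

x = rng.normal(size=2)                   # input
target = 0.7                             # desired output
W1 = rng.normal(size=(3, 2))             # input-to-hidden weights
W2 = rng.normal(size=(1, 3))             # hidden-to-output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W1, W2):
    h = sigmoid(W1 @ x)
    return h, sigmoid(W2 @ h)

h, y = forward(W1, W2)

# backpropagation: chain rule, layer by layer, for E = 0.5*(y - target)^2
delta2 = (y - target) * y * (1 - y)      # error at the output's pre-activation
grad_W2 = np.outer(delta2, h)
delta1 = (W2.T @ delta2) * h * (1 - h)   # error propagated to the hidden layer
grad_W1 = np.outer(delta1, x)

# numerical check of one weight's gradient
eps = 1e-6
W1p = W1.copy(); W1p[0, 0] += eps
_, yp = forward(W1p, W2)
num = (0.5 * (yp - target) ** 2 - 0.5 * (y - target) ** 2).item() / eps
print(abs(num - grad_W1[0, 0]))          # tiny: analytic matches numerical
```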

ANNs implementing DBNs have been around for a long time (they go back at least to Fukushima's Neocognitron).

Divisive normalization models have explained how attention can facilitate or suppress some neurons' responses.

Patton and Anastasio present a model of "enhancement and modality-specific suppression in multi-sensory neurons" that requires no multiplicative interaction. It is a follow-up to their earlier functional model of these neurons, which requires complex computation.

Anastasio et al. present a model of the response properties of multi-sensory SC neurons which explains enhancement, depression, and super-additivity using Bayes' rule: if one assumes that a neuron integrates its input to infer the posterior probability of a stimulus source being present in its receptive field, then these effects arise naturally.

Anastasio et al.'s model of SC neurons assumes that these neurons receive multiple inputs with Poisson noise and apply Bayes' rule to calculate the posterior probability of a stimulus being in their receptive fields.

Anastasio et al. point out that, given their model of SC neurons computing the probability of a stimulus being in their RF with Poisson-noised input, a sigmoid response function arises for uni-sensory input.

Beck et al. argue that sub-optimal computations in biological and artificial neural networks can amplify behavioral and perceptual variability caused by internal and external noise.

Beck et al. argue that sub-optimal computations are a greater cause of behavioral and perceptual variability than internal noise.

Optimal operations are often not feasible for complex tasks for two reasons:

- the generative models necessary to do optimal estimation are too complex and require a lot of knowledge to create
- applying these models is much too computationally intensive

Jazayeri and Movshon present an ANN model for computing likelihood functions ($\approx$ probability density functions with uniform priors) from input population responses with arbitrary tuning functions.

Their assumptions are

- restricted types of noise characteristics (e.g. Poisson noise)
- statistically independent noise

Since they work with log likelihoods, they can circumvent the problem of requiring neural multiplication.

Multiplying probabilities is equivalent to adding their logs. Thus, working with log likelihoods, one can circumvent the necessity of neural multiplication when combining probabilities.

Multisensory integration, however, has been viewed as integration of information in exactly that sense, and it is well known that multisensory neurons respond super-additively to stimuli from different modalities.

In Jazayeri and Movshon's model, decoding (or output) neurons calculate the logarithm of the input neurons' tuning functions.

This is not biologically plausible because it would give them transfer functions which are non-linear and non-sigmoid (and typical biologically plausible transfer functions are said to be sigmoid).

The ANN model of multi-sensory integration in the SC due to Ohshiro et al. manages to replicate a number of physiological findings about the SC:

- inverse effectiveness,
- long-range inhibition and
- short-range activation,
- multisensory integration,
- different tuning to modalities between neurons,
- weighting of stimuli from different modalities.

It does not learn and it has no probabilistic motivation.

The ANN model of multi-sensory integration in the SC due to Ohshiro et al. uses divisive normalization to model multisensory integration in the SC.

Rowland et al. derive a model of cortico-collicular multi-sensory integration from findings concerning the influence of deactivation or ablation of the cortical regions anterior ectosylvian cortex (AES) and rostral lateral suprasylvian cortex.

It is a single-neuron model.

Trappenberg presents a competitive spiking neural network for generating motor output of the SC.

We do not know the types of functions computable by neurons.

A neural population may encode a probability density function if each neuron's response represents the probability (or log probability) of some concrete value of a latent variable.

According to Spratling's model, saliency arises from unexpected features in a scene.

The traditional reservoir computing architecture consists of input, reservoir, and output population.

The input layer is feed-forward connected to the reservoir.

The reservoir neurons are connected to each other and to the neurons in the output layer.

Only the connections from the reservoir to output neurons are learned; the others are randomly initialized and fixed.

The number of reservoir nodes in reservoir computing is typically much larger than the number of input or output neurons.

A reservoir network therefore first translates the low-dimensional input into a high-dimensional space and back into a low-dimensional space.

The transfer functions of reservoir nodes in reservoir computing are usually non-linear. Therefore, the transfer from low- to high-dimensional space is non-linear, and linearly inseparable representations in the input layer can be transferred into linearly separable representations in the reservoir layer. Training the linear, non-recurrent output layer is therefore enough even for problems which could not be solved with a single-layer perceptron on its own.
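The separability point can be illustrated with a static stand-in for the reservoir: a fixed random tanh expansion makes XOR, which no linear readout can solve in the raw 2-D input space, solvable by a trained linear readout alone. Dimensions and weight scales are arbitrary, and a real reservoir would of course be recurrent:

```python
import numpy as np

rng = np.random.default_rng(5)

# XOR: not linearly separable in the 2-D input space
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# fixed random non-linear expansion into 20 dimensions
# (a static analogue of the untrained reservoir)
W_in = rng.normal(size=(2, 20))
b = rng.normal(size=20)
H = np.tanh(X @ W_in + b)

# only the linear readout is trained, here by least squares
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
pred = (H @ w_out > 0.5).astype(float)
print((pred == y).all())                 # the readout solves XOR
```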

Reservoir networks exhibit fading memory.

A good reservoir network shows very different behavior for semantically different input, and similar behavior for semantically similar input.

Basically, that's what we want of every network, though.

Appeltant et al. replace the large reservoir population by a simple delay system consisting of just one node and a delay loop.

Instead of feeding the input into the system all at once through parallel input connections, it is time-multiplexed such that the reaction to parts of the input is already traveling through the delay loop when other parts enter the system.

Yamashita et al. modify Deneve et al.'s network by weakening divisive normalization and lateral inhibition. Thus, their network integrates localizations if the disparity between localizations in the simulated modalities is low, and maintains multiple hills of activation if the disparity is high, thus accounting for the ventriloquism effect.

Yamashita et al. argue that, since whether or not two stimuli in different modalities with a certain disparity are integrated depends on the weight profiles in their network, a Bayesian prior is somehow encoded in these weights.

The model due to Cuppini et al. develops low-level multisensory integration (spatial principle) such that integration happens only with higher-level input.

In their model, Hebbian learning leads to sharpening of receptive fields, overlap of receptive fields, and integration through higher-cognitive input.

Through simulations of neurons (and neuron ensembles), numbers of neurons and time scales can be monitored which both are not possible *in vivo*.

This is mainly an argument in favor of computational neuroscience. It is less valid for ANNs in classical AI, where neuronal models are quite detached from biological neurons.

According to Rucci et al., neuroscientists can use robots to quantitatively test and analyze their theories.

The degree to which neuroscientists can draw conclusions from computational models depends on biological accuracy.

If input to biologically plausible models is too dissimilar to natural input, then that can lead to non-natural behavior of the model.

Sensory noise in robotic experiments validates a model's robustness. It is always realistic (but not necessarily natural).

Rucci et al. model multi-sensory integration in the barn owl OT using leaky integrator firing-rate neurons and reinforcement learning.

Inhibitory connections can help stability in a network: they can broaden the range of connection parameters in which the network exhibits differentiated behavior.

The SOM has ancestors in von der Malsburg's "Self-Organization of Orientation Sensitive Cells in the Striate Cortex" and other early models of self-organization.

The SOM is an abstraction of biologically-plausible ANNs.

The SOM is an asymptotically optimal vector quantizer.
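A minimal 1-D SOM illustrates the vector-quantization view: trained on uniform input, the map becomes topographically ordered and its weights spread out to quantize the input range. All parameters below are ad-hoc choices:

```python
import numpy as np

rng = np.random.default_rng(6)

n_units = 10
units = np.arange(n_units)
w = rng.uniform(0, 1, n_units)            # randomly initialized 1-D map

for t in range(10_000):
    x = rng.uniform(0, 1)                 # sample from the input distribution
    bmu = int(np.argmin(np.abs(w - x)))   # best-matching unit
    sigma = 5.0 * np.exp(-t / 2000)       # shrinking neighborhood radius
    lr = 0.5 * np.exp(-t / 4000)          # decaying learning rate
    h = np.exp(-0.5 * ((units - bmu) / sigma) ** 2)
    w += lr * h * (x - w)                 # move the BMU and its neighbors

# topographic order: weights change monotonically along the map
ordered = bool(np.all(np.diff(w) > 0) or np.all(np.diff(w) < 0))
print(ordered, w.min(), w.max())          # ordered, spread over [0, 1]
```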

There is no cost function that the SOM algorithm follows exactly.

Quality of order in SOMs is a difficult issue because there is no unique definition of 'order' for the $n$-dimensional case if $n>2$.

Nevertheless, there have been a number of attempts.

There have been many extensions of the original SOM ANN, like

- (Growing) Neural Gas
- adaptive subspace SOM (ASSOM)
- Parameterized SOM (PSOM)
- Stochastic SOM
- recursive and recurrent SOMs

Recursive and recurrent SOMs have been used for mapping temporal data.

Von der Malsburg introduces a simple model of self-organization which explains the organization of orientation-sensitive cells in the visual cortex.

Deneve et al.'s model (2001) does not compute a population code; it mainly recovers a clean population code from a noisy one.

Hebbian learning, and in particular SOM-like algorithms, have been used to model cross-sensory spatial register (e.g. in the SC).

Bauer et al. present a SOM variant which learns the variance of different sensory modalities (assuming Gaussian noise) to model multi-sensory integration in the SC.

Bauer and Wermter present an ANN algorithm which takes from the self-organizing map (SOM) algorithm the ability to learn a latent variable model from its input. They extend the SOM algorithm so that it learns about the distribution of noise in the input and computes probability density functions over the latent variables. The algorithm represents these probability density functions using population codes. This is done with very few assumptions about the distribution of noise.

Bauer and Wermter use the algorithm they proposed to model multi-sensory integration in the SC. They show that it can learn to near-optimally integrate noisy multi-sensory information and reproduces spatial register of sensory maps, the spatial principle, the principle of inverse effectiveness, and near-optimal audio-visual integration in object localization.