Tag: probabilities


Low probabilities are often mis-estimated by humans; depending on the setting, they can be over- or underestimated.

Chalk et al. hypothesize that biological cognitive agents learn a generative model of sensory input and rewards for actions.

One of the benefits of Soltani and Wang's model is that its neurons need not perform complex computations: simply by counting active synapses, they compute log probabilities of reward. The learning rule is what ensures that the correct number of neurons is active given the input.
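A heavily simplified sketch in the spirit of Soltani and Wang's binary, stochastically plastic synapses (not their actual model; the update probability `q` and function names are my own): on reward, depressed synapses are potentiated with some probability, and on failure, potentiated synapses are depressed. The fraction of potentiated synapses then tracks the probability of reward, so a count of active synapses behaves like an (unnormalized) estimate of it.

```python
import random

def update_synapses(synapses, rewarded, q=0.1, rng=random):
    """Toy stochastic binary plasticity (simplified from Soltani & Wang).

    On reward, each depressed synapse (0) is potentiated with probability q;
    on no reward, each potentiated synapse (1) is depressed with probability q.
    At equilibrium the fraction of potentiated synapses approximates the
    probability of reward.
    """
    for i, s in enumerate(synapses):
        if rewarded and s == 0 and rng.random() < q:
            synapses[i] = 1
        elif not rewarded and s == 1 and rng.random() < q:
            synapses[i] = 0
    return synapses
```

Running this with a reward probability of 0.8 drives the fraction of potentiated synapses toward roughly 0.8, since potentiation and depression balance exactly when the fraction equals the reward probability.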

Soltani and Wang only consider percepts and reward. They do not model any generative causes behind the two.

In Chalk et al.'s model, low-level sensory neurons are responsible for calculating the probabilities of high-level hidden variables given certain features being present or not. Other neurons are then responsible for predicting the rewards of different actions depending on the presumed state of those hidden variables.

In Chalk et al.'s model, neurons update their parameters online, i.e. during the task. In one condition of their experiments, only the reward-predicting neurons are updated; in others, the perceptual neurons are updated as well. Reward prediction was better when the perceptual responses were tuned, too.

Ursino et al. divide models of multisensory integration into three categories:

  1. Bayesian models (optimal integration etc.),
  2. neuron and network models,
  3. models on the semantic level (symbolic models).

Chen et al. presented a system which uses a SOM to cluster states. After learning, each SOM unit is extended with a histogram recording how often that unit was the BMU while the input belonged to each of a number of known classes $$C=\{c_1,c_2,\dots,c_n\}$$.

The system is used in robot soccer. Each class is connected to an action. Actions are chosen by finding the BMU in the net and selecting the action connected to its most likely class.

In an unsupervised, online phase, these histograms are updated in a reinforcement-learning fashion: whenever the selected action led to success, the bin of the BMU's histogram corresponding to the most likely class is incremented; otherwise it is decremented.
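A toy sketch of this selection-and-update scheme (names and data layout invented; Chen et al.'s actual system differs in detail):

```python
def best_matching_unit(units, x):
    """Find the SOM unit whose weight vector is closest to input x."""
    def dist2(w):
        return sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    return min(units, key=lambda u: dist2(u["w"]))

def select_action(units, x, actions):
    """Pick the action attached to the BMU's most likely class."""
    bmu = best_matching_unit(units, x)
    c = max(bmu["hist"], key=bmu["hist"].get)   # most likely class
    return bmu, c, actions[c]

def reinforce(bmu, c, success, step=1):
    """Online update: raise the chosen class bin on success, lower it otherwise."""
    bmu["hist"][c] = max(0, bmu["hist"][c] + (step if success else -step))
```

Each unit here is a dict with a weight vector `"w"` and a class histogram `"hist"`; the `max(0, …)` clamp merely keeps bins non-negative and is my own choice.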

There is evidence suggesting that the brain actually does perform statistical processing.

If we want to learn classification using backprop, we cannot force our network to create binary output because binary output is not a smooth function of the input.

Instead we can let our network learn to output the log probability for each class given the input.
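A standard way to do this (a minimal sketch with invented names): let the network output raw scores and map them through a log-softmax, which is smooth and hence usable with backprop; the negative log probability of the true class is then the usual loss.

```python
import math

def log_softmax(scores):
    """Smoothly map raw class scores to log probabilities.

    Unlike a hard 0/1 decision, this is a differentiable function of the
    scores, so the network can be trained with backprop.
    """
    m = max(scores)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    return [s - log_z for s in scores]

def nll_loss(scores, true_class):
    """Negative log probability of the true class (cross-entropy loss)."""
    return -log_softmax(scores)[true_class]
```

Exponentiating the outputs recovers a proper probability distribution over the classes, summing to one.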

Learning in RBMs is competitive but without explicit inhibition (because the RBM is restricted in that it has no within-layer connections). Neurons learn different things due to random initialization and stochastic processing.

A Deep Belief Network is a multi-layered, feed-forward network in which each successive layer infers latent variables of the input from the output of the preceding layer.

Yang and Shadlen show that neurons in LIP (in monkeys) encode the log probability of reward given artificial visual stimuli in a weather prediction task experiment.

Some models view attentional changes of neural responses as the result of Bayesian inference about the world based on changing priors.

Chalk et al. argue that changing the task should not change expectations (the prior) about the state of the world. Rather, it might change the model of how reward depends on that state.

Deneve describes neurons as integrating probabilities based on single incoming spikes. Spikes are treated as outcomes of Poisson processes, and the neurons' task is to infer the hidden values of those processes' parameters. She uses the leaky integrate-and-fire neuron as the basis for her model.
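A toy sketch of this kind of evidence accumulation (not Deneve's exact equations; the additive weight-per-spike rule and the leak toward the prior are my own simplification): each incoming spike adds its synapse's log-likelihood-ratio weight to a running log-odds estimate, and a leak pulls that estimate back toward the prior, reflecting a world whose hidden state can change.

```python
def log_odds_integrator(spike_trains, weights, leak=0.05, prior=0.0, steps=100):
    """Toy leaky integration of log-odds from spiking inputs.

    At each time step, every input spike contributes its synapse's
    log-likelihood-ratio weight; the leak decays the estimate toward the
    prior, so old evidence is gradually forgotten.
    """
    L = prior
    trace = []
    for t in range(steps):
        drive = sum(w for w, train in zip(weights, spike_trains) if train[t])
        L = L + drive - leak * (L - prior)
        trace.append(L)
    return trace
```

With no spikes the estimate simply relaxes to the prior, which is one way to read the note below that the neurons' leakiness reflects changing world conditions.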

Deneve models a changing world; hidden variables may change according to a Markov chain. Her neural model deals with that. Wow.

Hidden variables in Deneve's model seem to be binary. Differences between synapses (or rather, their inputs) come down to weights describing how 'informative' they are about the hidden variable.

The leakiness of the neurons in Deneve's model is due to changing world conditions.

Neurons in Deneve's model actually generate Poisson-like output themselves (though deterministically).

The spike-generation process is described as predictive: a neuron $n_1$ fires if the probability $P_1(t)$ estimated by $n_1$ from its input is greater than the probability $P_2(t)$ estimated by another neuron $n_2$ from $n_1$'s input.

In an efficient population code, neural responses are statistically independent.

A representation of probabilities is not necessary for optimal estimation.

A neural population may encode a probability density function if each neuron's response represents the probability (or log probability) of some concrete value of a latent variable.
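A minimal illustration of that idea (hypothetical names; assuming each neuron's response is proportional to the probability of 'its' value of the latent variable): normalizing the population response recovers the encoded distribution, from which quantities such as the expectation can be read out.

```python
def responses_to_pmf(responses):
    """Recover the encoded distribution from a population response.

    Assumes neuron i's firing rate is proportional to the probability of
    value i of the latent variable, so normalization yields the pmf.
    """
    z = sum(responses)
    return [r / z for r in responses]

def encoded_expectation(values, responses):
    """Read out the expected value of the latent variable."""
    pmf = responses_to_pmf(responses)
    return sum(v * p for v, p in zip(values, pmf))
```

For a log-probability code one would exponentiate the responses before normalizing; the readout is otherwise the same.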

Early visual neurons (e.g. in V1) do not seem to encode probabilities.

I'm not so sure that early visual neurons don't encode probabilities. The question is: which probabilities do they encode? That of a line being there?

Deneve describes how neurons performing Bayesian inference on variables behind Poisson inputs can learn the parameters of the Poisson processes in an online variant of the expectation maximization (EM) algorithm.
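A generic sketch of online EM for a binary hidden variable behind Poisson counts (my own simplification, not Deneve's specific neural rule): the E-step computes the posterior responsibility of each hidden state for the current observation, and the M-step nudges that state's rate parameter toward the observation, weighted by the responsibility.

```python
import math

def poisson_logpmf(k, lam):
    """Log probability of count k under a Poisson with rate lam."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def online_em_poisson(counts, lam=(1.0, 5.0), pi=0.5, eta=0.05):
    """Toy online EM for a two-state hidden variable behind Poisson counts.

    For each incoming count:
      E-step: posterior responsibility r1 = P(state=1 | count).
      M-step: move each state's rate toward the count, weighted by its
              responsibility, and track the mixing proportion pi.
    """
    lam0, lam1 = lam
    for k in counts:
        l0 = math.log(1 - pi) + poisson_logpmf(k, lam0)
        l1 = math.log(pi) + poisson_logpmf(k, lam1)
        r1 = 1.0 / (1.0 + math.exp(l0 - l1))
        lam0 += eta * (1 - r1) * (k - lam0)
        lam1 += eta * r1 * (k - lam1)
        pi += eta * (r1 - pi)
    return lam0, lam1, pi
```

Fed a stream alternating between low and high counts, the two rate estimates separate toward the two underlying regimes without ever being told which count came from which state.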

According to Barber et al., 'the original Hopfield net implements Bayesian inference on analogue quantities in terms of PDFs'.