Show Tag: probability

LIP seems to encode decision variables for saccade direction.

Soltani and Wang argue that their model is consistent with the 'base rate neglect' fallacy.

The base rate fallacy is a fallacy occurring in human decision making in which humans estimate a posterior probability without properly taking the prior probability into account (i.e. solely on the basis of the likelihood).
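
A worked illustration with made-up numbers: suppose a condition has prior probability $P(H)=0.01$, a test detects it with $P(D\mid H)=0.9$, and gives false positives with $P(D\mid\neg H)=0.09$. Then
$$P(H\mid D)=\frac{P(D\mid H)\,P(H)}{P(D\mid H)\,P(H)+P(D\mid\neg H)\,P(\neg H)}=\frac{0.9\cdot 0.01}{0.9\cdot 0.01+0.09\cdot 0.99}\approx 0.09,$$
whereas neglecting the base rate (the prior) suggests a value closer to $0.9$.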

Many visual person detection methods use a single feature to detect people and compute a histogram of that feature's strength across the image. They then compute a likelihood for a pixel or region by assuming a Gaussian distribution of the distances between pixels (or histograms) and those belonging to a face. This assumption has been validated in practice (for certain cases).
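
Under that assumption, the likelihood of a pixel or region whose feature distance from the face model is $d$ is simply
$$p(d\mid \text{face})=\frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(d-\mu)^2}{2\sigma^2}\right),$$
with $\mu$ and $\sigma$ estimated from labeled examples (notation mine).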

The Kalman filter assumes linear dynamics (state update) and Gaussian noise.
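
Written out (notation mine), the assumed generative model is linear-Gaussian:
$$x_t = A\,x_{t-1} + w_t,\quad w_t\sim\mathcal{N}(0,Q),\qquad z_t = H\,x_t + v_t,\quad v_t\sim\mathcal{N}(0,R),$$
under which the posterior over $x_t$ stays Gaussian and can be updated in closed form.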

The extended Kalman filter results from locally linearizing the (nonlinear) state-update and observation functions.
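
That is, for nonlinear state-update and observation functions $f$ and $h$, the EKF uses the Jacobians evaluated at the current estimate,
$$F_t=\left.\frac{\partial f}{\partial x}\right|_{\hat{x}_{t-1\mid t-1}},\qquad H_t=\left.\frac{\partial h}{\partial x}\right|_{\hat{x}_{t\mid t-1}},$$
in place of $A$ and $H$ in the standard Kalman equations.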

Particle filters are a numerical Monte Carlo approach to recursive Bayesian filtering which addresses problems with non-Gaussian posteriors.
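
A minimal sketch of one step of a bootstrap (sampling-importance-resampling) particle filter; the function names and the resampling threshold are illustrative choices, not prescribed by any particular source:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation, transition, likelihood):
    """One bootstrap-filter step: predict, reweight, resample if needed.

    particles  : (N, d) array of state samples
    weights    : (N,) normalized importance weights
    transition : propagates particles through the (possibly nonlinear,
                 non-Gaussian) dynamics, noise included
    likelihood : returns p(observation | particle) for each particle
    """
    particles = transition(particles)                       # predict
    weights = weights * likelihood(observation, particles)  # update
    weights /= weights.sum()
    # Resample when the effective sample size drops below half the particle count
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```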

The activity of an SC neuron is proportional to the probability of the endpoint of a saccade being in that neuron's receptive field.

Probabilistic value estimations (by humans) are subject to framing effects: how valuable a choice appears depends on how the circumstances are presented (framed).

Probabilistic value estimations are not linear in expected value.

The value function for uncertain gains seems to be generally concave, that of uncertain losses seems to be convex.

Low probabilities are often mis-estimated by humans; depending on the setting, they can be over- or underestimated.

In a cue combination task with correlated errors, some subjects combined cues according to a linear cue combination rule which would have been appropriate for uncorrelated cues, some combined them suboptimally altogether, and some combined them correctly according to a linear cue combination rule for correlated cues.
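
For reference, the standard result for two unbiased cues $\hat{s}_1,\hat{s}_2$ with variances $\sigma_1^2,\sigma_2^2$ and error correlation $\rho$: the minimum-variance linear combination $w\,\hat{s}_1+(1-w)\,\hat{s}_2$ uses
$$w=\frac{\sigma_2^2-\rho\,\sigma_1\sigma_2}{\sigma_1^2+\sigma_2^2-2\rho\,\sigma_1\sigma_2},$$
which reduces to the familiar $w=\sigma_2^2/(\sigma_1^2+\sigma_2^2)$ only for $\rho=0$.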

A deep SC neuron which receives enough information from one modality to reliably determine whether a stimulus is in its receptive field does not improve its performance much by integrating information from another modality.

Patton et al. use this insight to explain the diversity of uni-sensory and multisensory neurons in the deep SC.

There is evidence suggesting that the brain actually does perform statistical processing.

Ma, Beck, Latham and Pouget argue that optimal integration of population-coded probabilistic information can be achieved by simply adding the activities of neurons with identical receptive fields. The preconditions for this to hold are

  • independent Poisson (or other "Poisson-like") noise in the input,
  • identically-shaped tuning curves in input neurons
  • a point-to-point connection from neurons in different populations with identical receptive fields to the same output neuron.
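
A sketch of why this works (notation mine): with independent Poisson noise, the log-likelihood of a population response $\mathbf{r}$ with tuning curves $f_i$ is
$$\log P(\mathbf{r}\mid s)=\sum_i r_i\log f_i(s)-\sum_i f_i(s)+\mathrm{const},$$
and if the identically-shaped tuning curves tile the stimulus space so that $\sum_i f_i(s)$ is approximately independent of $s$, this is a fixed linear function of the responses. For two populations with identical tuning curves, the combined log-likelihood is then $\sum_i \bigl(r_i^{(1)}+r_i^{(2)}\bigr)\log f_i(s)+\mathrm{const}$, so a downstream population that simply adds the activities of identically-tuned neurons represents the product of the single-modality likelihoods.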

It's hard to unambiguously interpret Ma et al.'s paper, but it seems that, according to Renart and van Rossum, any other non-flat profile would also transmit the information optimally, although the decoding scheme might then have to be different.

Renart and van Rossum discuss optimal connection weight profiles between layers in a feed-forward neural network. They come to the conclusion that, if neurons in the input population have broad tuning curves, then Mexican-hat-like connectivity profiles are optimal.

Renart and van Rossum state that any non-flat connectivity profile between input and output layers in a feed-forward network yields optimal transmission if there is no noise in the output.

The model due to Ma et al. is simple and it requires no learning.

Colonius and Diederich argue that deep-SC neurons' spiking behavior can be interpreted as a vote for a target rather than a non-target being in their receptive field.

This is similar to Anastasio et al.'s previous approach.

There are a number of problems with Colonius' and Diederich's idea that deep-SC neurons' binary spiking behavior can be interpreted as a vote for a target rather than a non-target being in their RF. First, these neurons' RFs can be very broad, and the strength of their response is a function of how far away the stimulus is from the center of their RFs. Second, the response strength is also a function of stimulus strength. It needs some arguing, but to me it seems more likely that the response encodes the probability of a stimulus being in the center of the RF.

Colonius and Diederich argue that, given their Bayesian, normative model of neurons' response behavior, neurons responding to only one sensory modality outperform neurons responding to multiple sensory modalities.

Colonius' and Diederich's explanation for uni-sensory neurons in the deep SC has a few weaknesses: First, they model the input spiking activity for both the target and the non-target case as Poisson distributed. This is a problem, because the input spiking activity is really a function of the target distance from the center of the RF. Second, they explicitly model the probability of the visibility of a target to be independent of the probability of its audibility.

If SC neurons' spiking behavior can be interpreted as a vote for a target rather than a non-target being in their receptive field, then the decisions must be made somewhere else, because such votes do not take utility into account.

Probability matching is a sub-optimal decision strategy in a static setting, but it can have advantages because it leads to exploration.
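
A toy calculation of the static sub-optimality: if one option is rewarded with probability $p$ and the other with probability $1-p$, always choosing the better option yields expected reward $\max(p,1-p)$, whereas probability matching yields $p^2+(1-p)^2$; for $p=0.7$ that is $0.58$ versus $0.7$.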

Generative Topographic Mapping produces PDFs for latent variables given data points.

Kullback-Leibler divergence $D_{KL}(P,Q)$ between probability distributions $P$ and $Q$ can be interpreted as the information lost when approximating $P$ by $Q$.

For discrete probability distributions $P$ and $Q$ over the set of outcomes $E$, Kullback-Leibler divergence is defined as $$D_{KL}(P,Q)=\sum_{e\in E} P(e)\log\left(\frac{P(e)}{Q(e)}\right).$$
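
A quick sketch of this definition in code (natural logarithm, so the result is in nats; the example distributions are made up):

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(P, Q) for discrete distributions given as arrays of probabilities."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0  # outcomes with P(e) = 0 contribute nothing to the sum
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# Information lost when a biased coin P is approximated by a fair coin Q
print(kl_divergence([0.9, 0.1], [0.5, 0.5]))  # ~0.368 nats
```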

MLE provides an optimal method of reading population codes.

It's hard to implement MLE on population codes using neural networks.

Depending on the application, tuning curves, and noise properties, threshold linear networks calculating population vectors can have performance similar to MLE.
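
A small numerical sketch of the two read-outs on the same noisy population response; tuning parameters and stimulus values are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

prefs = np.linspace(-40.0, 40.0, 33)   # preferred stimuli of the population (a.u.)
sigma_tc, r_max = 15.0, 20.0           # tuning width and peak firing rate

def tuning(s):
    """Gaussian tuning curves: expected rate of each neuron for stimulus s."""
    return r_max * np.exp(-(s - prefs) ** 2 / (2 * sigma_tc ** 2))

s_true = 5.0
r = rng.poisson(tuning(s_true))        # one noisy population response

# Population-vector-style (center-of-mass) read-out: one value, no uncertainty
s_pv = np.sum(r * prefs) / np.sum(r)

# Maximum-likelihood read-out under the Poisson model:
# log L(s) = sum_i r_i log f_i(s) - sum_i f_i(s)
s_grid = np.linspace(-40.0, 40.0, 1001)
log_L = [np.sum(r * np.log(tuning(s) + 1e-12)) - np.sum(tuning(s)) for s in s_grid]
s_mle = s_grid[int(np.argmax(log_L))]

print(f"true: {s_true:.1f}  population vector: {s_pv:.2f}  MLE: {s_mle:.2f}")
```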

Translating a population code into just one value (or vector) discards all information about uncertainty.

Jazayeri and Movshon present an ANN model for computing likelihood functions ($\approx$ probability density functions with uniform priors) from input population responses with arbitrary tuning functions.

Their assumptions are

  • restricted types of noise characteristics (e.g. Poisson noise)
  • statistically independent noise

Since they work with log likelihoods, they can circumvent the problem of requiring neural multiplication.

Multiplying probabilities is equivalent to adding their logs. Thus, working with log likelihoods, one can circumvent the necessity of neural multiplication when combining probabilities.
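
Concretely, in Jazayeri and Movshon's scheme the log likelihood over the stimulus is a weighted sum of spike counts,
$$\log L(s)=\sum_i n_i\,\log f_i(s)+\mathrm{const}$$
(assuming the tuning curves tile the space so that $\sum_i f_i(s)$ is constant), so the decoding layer only needs fixed weights $\log f_i(s)$ and summation; the multiplication of probabilities happens implicitly in the addition of logs.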

Multisensory integration, however, has been viewed as integration of information in exactly that sense, and it is well known that multisensory neurons respond super-additively to stimuli from different modalities.

In Jazayeri and Movshon's model decoding (or output) neurons calculate the logarithm of the input neurons' tuning functions.

This is not biologically plausible because it would give them transfer functions which are non-linear and non-sigmoid (whereas biologically plausible transfer functions are typically said to be sigmoid).

There seems to be a linear relationship between the mean and variance of neural responses in cortex. This is similar to a Poisson distribution, where the variance equals the mean; however, the proportionality constant does not seem to be one in biology.

Seung and Sompolinsky introduce maximum likelihood estimation (MLE) as one possible mechanism for neural read-out. However, they state that it is not clear whether MLE can be implemented in a biologically plausible way.

Seung and Sompolinsky show that, in a population code with wide tuning curves and Poisson noise and under the conditions described in their paper, the response of neurons near threshold carries exceptionally high information.
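
The quantity behind this is the Fisher information; for independent Poisson neurons with tuning curves $f_i(s)$,
$$I(s)=\sum_i\frac{f_i'(s)^2}{f_i(s)},$$
so neurons whose tuning curve rises steeply from a low rate at $s$ (i.e. neurons near threshold) contribute disproportionately large terms.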

Neural populations can compute and encode probability density functions for external variables.