Show Tag: probability-density-functions


The Kullback-Leibler divergence $D_{KL}(P \parallel Q)$ between probability distributions $P$ and $Q$ can be interpreted as the information lost when approximating $P$ by $Q$.
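
As a concrete numerical illustration (the distributions below are made up for the example), the following sketch computes $D_{KL}(P \parallel Q)$ for two small discrete distributions, using `scipy.special.rel_entr` for the elementwise terms $p_i \log(p_i / q_i)$:

```python
import numpy as np
from scipy.special import rel_entr

# Two discrete distributions over the same support (hypothetical values).
p = np.array([0.5, 0.3, 0.2])   # "true" distribution P
q = np.array([0.4, 0.4, 0.2])   # approximating distribution Q

# D_KL(P || Q) = sum_i p_i * log(p_i / q_i), measured in nats.
d_kl = rel_entr(p, q).sum()
print(f"D_KL(P || Q) = {d_kl:.4f} nats")
```

Note that the divergence is asymmetric: swapping the arguments gives the information lost when approximating $Q$ by $P$, which is generally a different value.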

The weights (and biases) of a trained RBM (restricted Boltzmann machine) implicitly encode a probability distribution over visible configurations, fit to the training set.
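
As a minimal sketch of what this means in practice (the parameters below are random stand-ins for trained weights, and the function name is invented for this example): a binary RBM assigns each visible vector $v$ an unnormalized probability $p(v) \propto e^{-F(v)}$, where the free energy $F(v)$ is computed directly from the weights and biases.

```python
import numpy as np

def rbm_free_energy(v, W, b_vis, b_hid):
    """Free energy F(v) of a binary RBM; p(v) is proportional to exp(-F(v))."""
    # F(v) = -b_vis . v - sum_j log(1 + exp(b_hid_j + v . W[:, j]))
    return -v @ b_vis - np.sum(np.logaddexp(0.0, b_hid + v @ W))

# Hypothetical parameters standing in for a trained model.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(6, 4))   # visible x hidden weight matrix
b_vis = rng.normal(scale=0.1, size=6)    # visible biases
b_hid = rng.normal(scale=0.1, size=4)    # hidden biases

v1 = np.array([1, 0, 1, 1, 0, 0], dtype=float)
v2 = np.array([0, 1, 0, 0, 1, 1], dtype=float)

# Lower free energy means higher (unnormalized) probability under the model.
print(rbm_free_energy(v1, W, b_vis, b_hid), rbm_free_energy(v2, W, b_vis, b_hid))
```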

Given probability density functions (PDFs) $P(X)$ and $P(M\mid X)$ for a latent variable $X$ and an observable $M$, an optimal estimator for $X$ with respect to a loss function $L$ is given by $$ f_{\mathrm{opt}} = \mathrm{arg\,min}_f \int P(x) \int P(m\mid x)\, L(x, f(m))\;dm\;dx $$
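
A grid-based sketch of this minimization (the Gaussian prior, noise model, and grid are made up for the example): under squared loss $L(x, f) = (x - f)^2$, brute-force minimization of the expected loss recovers the posterior mean, as expected.

```python
import numpy as np

# Discretized latent x and observation m (toy Gaussian setup; all values assumed).
xs = np.linspace(-4, 4, 201)                          # latent variable grid
ms = np.linspace(-4, 4, 201)                          # observation grid
prior = np.exp(-xs**2 / 2); prior /= prior.sum()      # P(x): standard normal

sigma = 0.5                                           # noise in m = x + noise
lik = np.exp(-(ms[None, :] - xs[:, None])**2 / (2 * sigma**2))   # P(m | x)

joint = prior[:, None] * lik                          # P(x, m) up to a constant
posterior = joint / joint.sum(axis=0, keepdims=True)  # P(x | m) on the grid

# Squared loss: loss[i, j] = L(x_i, f_j); expected loss has one row per m,
# one column per candidate value of f(m).
loss = (xs[:, None] - xs[None, :])**2
expected_loss = posterior.T @ loss
f_opt = xs[np.argmin(expected_loss, axis=1)]          # brute-force optimum per m
posterior_mean = (posterior * xs[:, None]).sum(axis=0)

print(np.max(np.abs(f_opt - posterior_mean)))         # small, up to grid spacing
```

Because $f$ can be chosen independently for each value of $m$, the minimization decomposes into a separate one-dimensional problem per observation, which is what the brute force over candidate values exploits.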

A neural population may encode a probability density function if each neuron's response represents the probability (or log probability) of a particular value of a latent variable.
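
A toy sketch of such a log-probability population code (the gain, offset, and latent grid are invented for illustration): each neuron's rate is an affine function of $\log P(x = x_i)$ at its preferred value $x_i$, so a matching decoder recovers the encoded distribution by inverting the mapping and renormalizing.

```python
import numpy as np

xs = np.linspace(-2, 2, 9)                        # preferred latent values, one per neuron
p_true = np.exp(-xs**2); p_true /= p_true.sum()   # the PDF to be encoded (on a grid)

# Hypothetical encoding: rate_i = gain * log P(x_i) + offset (offset keeps rates non-negative).
gain, offset = 10.0, 60.0
rates = gain * np.log(p_true) + offset

# Decoding: invert the affine map and renormalize to recover the PDF.
p_decoded = np.exp((rates - offset) / gain)
p_decoded /= p_decoded.sum()
print(np.allclose(p_true, p_decoded))             # True
```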