A great many important inequalities in information theory are actually lower bounds for the Kullback-Leibler divergence.
To measure how similar the two distributions are, we use the Kullback-Leibler divergence, $D_{\mathrm{KL}}(P \parallel Q) = \sum_{x} P(x)\,\log\frac{P(x)}{Q(x)}$.
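As a concrete illustration of that comparison, the following is a minimal Python/NumPy sketch of the discrete divergence; the distributions p and q are made-up examples, and zero-probability entries are assumed away for simplicity.

```python
import numpy as np

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence D_KL(p || q) in nats.

    Assumes p and q are 1-D arrays of strictly positive probabilities
    that each sum to 1 (no zero-handling in this sketch).
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

# Hypothetical example distributions over three outcomes.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(kl_divergence(p, q))  # small positive value; exactly 0.0 only when p == q
```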
This fundamental inequality states that the Kullback-Leibler divergence is non-negative.
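Written out, with $P$ and $Q$ generic distributions on a common space (the notation is supplied here, not quoted from the source), the inequality is:

\[
D_{\mathrm{KL}}(P \parallel Q) \;\ge\; 0,
\]

with equality if and only if $P = Q$ (almost everywhere).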
Alternatively, it can be written as an expected value of simpler Kullback-Leibler divergences, as in the decomposition below.
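One standard decomposition of this kind is the chain rule for relative entropy; the joint distributions $P(X,Y)$ and $Q(X,Y)$ here are generic placeholders rather than notation taken from the source:

\[
D_{\mathrm{KL}}\big(P(X,Y) \,\|\, Q(X,Y)\big)
= D_{\mathrm{KL}}\big(P(X) \,\|\, Q(X)\big)
+ \mathbb{E}_{P(X)}\!\Big[ D_{\mathrm{KL}}\big(P(Y \mid X) \,\|\, Q(Y \mid X)\big) \Big].
\]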
Note also that there is a relation between the Kullback-Leibler divergence and the "rate function" in the theory of large deviations.
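A concrete example of that relation is Sanov's theorem, in which the Kullback-Leibler divergence appears as the large-deviations rate function for empirical distributions; stated informally,

\[
\Pr\big( \hat{P}_n \in A \big) \;\approx\; e^{-n \,\inf_{\nu \in A} D_{\mathrm{KL}}(\nu \parallel \mu)},
\]

where $\hat{P}_n$ is the empirical distribution of $n$ i.i.d. samples from $\mu$.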
On the other hand, it seems to be much more difficult to derive useful upper bounds for the Kullback-Leibler divergence.
This step involves minimizing the cross-entropy or Kullback-Leibler divergence.
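The two objectives coincide because, for a fixed target distribution $p$, the cross-entropy differs from the Kullback-Leibler divergence only by a constant; this identity is standard and not specific to the source of the sentence above:

\[
H(p, q) \;=\; H(p) + D_{\mathrm{KL}}(p \parallel q),
\]

so minimizing $H(p, q)$ over $q$ is equivalent to minimizing $D_{\mathrm{KL}}(p \parallel q)$.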
Here, the idea is to maximize the expected Kullback-Leibler divergence of the posterior distribution relative to the prior.
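In the Bayesian settings this kind of statement usually refers to (for example, experimental design or the construction of reference priors), the quantity being maximized can be written as an expected information gain; the notation below is illustrative rather than taken from the source:

\[
U \;=\; \mathbb{E}_{y \sim p(y)}\!\left[ D_{\mathrm{KL}}\big( p(\theta \mid y) \,\|\, p(\theta) \big) \right]
\;=\; \int p(y) \int p(\theta \mid y) \log \frac{p(\theta \mid y)}{p(\theta)} \, d\theta \, dy .
\]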
Note that $D_{\mathrm{KL}}(P \parallel Q)$ is the relative entropy of $P$ with respect to $Q$, also called the Kullback-Leibler divergence.
This corresponds to the Kullback-Leibler divergence or relative entropy.
In information theory and machine learning, information gain is a synonym for Kullback-Leibler divergence.
Kullback-Leibler divergences between the whole posterior distributions of the slope and variance do not indicate non-normality.
Pinsker's inequality relates Kullback-Leibler divergence and total variation distance.
The total variation distance is related to the Kullback-Leibler divergence by Pinsker's inequality.
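In the usual formulation, with the total variation distance defined as $\delta(P,Q) = \sup_A |P(A) - Q(A)|$ and the divergence measured in nats, Pinsker's inequality reads:

\[
\delta(P, Q) \;\le\; \sqrt{\tfrac{1}{2}\, D_{\mathrm{KL}}(P \parallel Q)} .
\]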
It is defined as the Kullback-Leibler divergence from the distribution $p$ to a reference measure $m$, $D_{\mathrm{KL}}(p \parallel m) = \int p(x)\,\log\frac{p(x)}{m(x)}\,dx$.
Another inequality concerning the Kullback-Leibler divergence is known as Kullback's inequality.
The difference between the two quantities is the Kullback-Leibler divergence or relative entropy, so the inequality can also be written $D_{\mathrm{KL}}(P \parallel Q) \ge 0$.
Use of a logarithmic score function, for example, leads to the expected utility taking the form of the Kullback-Leibler divergence.
Alternatively, the metric can be obtained as the second derivative of the relative entropy or Kullback-Leibler divergence.
The Kullback-Leibler divergence is additive for independent distributions in much the same way as Shannon entropy.
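Concretely, for product (independent) distributions $P_1 \otimes P_2$ and $Q_1 \otimes Q_2$ (generic notation, not taken from the source):

\[
D_{\mathrm{KL}}\big(P_1 \otimes P_2 \,\|\, Q_1 \otimes Q_2\big)
= D_{\mathrm{KL}}(P_1 \parallel Q_1) + D_{\mathrm{KL}}(P_2 \parallel Q_2).
\]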
However, the Kullback-Leibler divergence is rather directly related to a metric, specifically, the Fisher information metric.
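One standard way to state that relation for a parametric family $\{P_\theta\}$ (notation supplied here for illustration) is that the Fisher information metric arises from the second-order behaviour of the divergence:

\[
g_{ij}(\theta_0) \;=\; \left.\frac{\partial^2}{\partial \theta_i\, \partial \theta_j}\, D_{\mathrm{KL}}\big(P_{\theta_0} \,\|\, P_{\theta}\big)\right|_{\theta = \theta_0},
\qquad
D_{\mathrm{KL}}\big(P_{\theta_0} \,\|\, P_{\theta_0 + d\theta}\big) \;\approx\; \tfrac{1}{2}\, g_{ij}(\theta_0)\, d\theta^i\, d\theta^j .
\]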
The Kullback-Leibler divergence is named after Solomon Kullback and Richard Leibler.
Kullback-Leibler divergence (information gain)
The total correlation is also the Kullback-Leibler divergence between the actual joint distribution $p(X_1, \ldots, X_n)$ and its maximum entropy product approximation $\prod_{i} p(X_i)$.
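Spelled out with the same generic notation, the total correlation can be written as:

\[
C(X_1, \ldots, X_n) \;=\; \sum_{i=1}^{n} H(X_i) - H(X_1, \ldots, X_n)
\;=\; D_{\mathrm{KL}}\!\Big( p(X_1, \ldots, X_n) \,\Big\|\, \textstyle\prod_{i=1}^{n} p(X_i) \Big).
\]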
Like mutual information, conditional mutual information can be expressed as a Kullback-Leibler divergence: $I(X;Y \mid Z) = D_{\mathrm{KL}}\big(p(X,Y,Z) \,\|\, p(X \mid Z)\, p(Y \mid Z)\, p(Z)\big)$.