How to interpret differential entropy?

I recently read this article on the entropy of a discrete probability distribution. It describes a nice way of thinking about entropy as the expected number of bits (at least when using $\log_2$ in your entropy definition) needed to encode a message when your encoding is optimal, given the probability distribution of the words you use.
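For example, here is a minimal sketch of that interpretation (using NumPy and a made-up four-word distribution, neither of which comes from the article):

```python
import numpy as np

# A made-up distribution over four "words" (hypothetical example).
p = np.array([0.5, 0.25, 0.125, 0.125])

# Entropy in bits: the expected number of bits per word under an optimal code.
H = -np.sum(p * np.log2(p))

# Ideal codeword lengths -log2 p(x): here 1, 2, 3 and 3 bits.
lengths = -np.log2(p)

print(H, np.sum(p * lengths))  # both print 1.75
```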

However, when extending to the continuous case like here I believe this way of thinking breaks down, since $\sum_x p(x) = \infty$ for any continuous probability distribution $p(x)$ (please correct me if that is wrong), so I was wondering if there is a nice way of thinking about what continuous entropy means, just like with the discrete case.

Answer

There is no interpretation of differential entropy which would be as meaningful or useful as that of discrete entropy. The problem with continuous random variables is that each individual value typically has probability 0, and would therefore require an infinite number of bits to encode exactly.

If you look at the limit of discrete entropy by measuring the probability of intervals $[n\varepsilon, (n+1)\varepsilon)$, you end up with

$$-\int p(x) \log_2 p(x) \, dx - \log_2 \varepsilon$$

and not the differential entropy. This quantity is in a sense more meaningful, but will diverge to infinity as we take smaller and smaller intervals. It makes sense, since we will need more and more bits to encode in which of the many intervals the value of our random variable falls.
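To make the divergence concrete, here is a rough numerical sketch (assuming NumPy and a standard normal, both chosen only for illustration): discretizing $\mathcal{N}(0,1)$ into bins of width $\varepsilon$ gives a discrete entropy close to $\frac{1}{2}\log_2(2\pi e) - \log_2 \varepsilon$, which grows without bound as $\varepsilon \to 0$.

```python
import numpy as np

# Discretize a standard normal into bins of width eps and compare the discrete
# entropy of the bin probabilities with  -integral p(x) log2 p(x) dx - log2(eps).
# The differential entropy of N(0,1) is 0.5*log2(2*pi*e), about 2.05 bits.
h_diff = 0.5 * np.log2(2 * np.pi * np.e)

for eps in [0.5, 0.1, 0.01]:
    centers = np.arange(-10, 10, eps) + eps / 2
    # Midpoint approximation of P([n*eps, (n+1)*eps)).
    p_bins = np.exp(-centers**2 / 2) / np.sqrt(2 * np.pi) * eps
    H_discrete = -np.sum(p_bins * np.log2(p_bins))
    print(eps, H_discrete, h_diff - np.log2(eps))
```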

A more useful quantity to look at for continuous distributions is the relative entropy (also known as the Kullback-Leibler divergence). For discrete distributions:

$$D_{KL}[P \mid\mid Q] = \sum_x P(x) \log_2 \frac{P(x)}{Q(x)}.$$

It measures the number of extra bits used when the true distribution is $P$, but we use $-\log_2 Q(x)$ bits to encode $x$. We can take the limit of relative entropy and arrive at

$$D_{KL}[p \mid\mid q] = \int p(x) \log_2 \frac{p(x)}{q(x)} \, dx,$$

because the $\log_2 \varepsilon$ terms cancel. For continuous distributions this corresponds to the number of extra bits used in the limit of infinitesimally small bins. For both continuous and discrete distributions, this is always non-negative.
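As a sanity check on the cancellation (a numerical sketch assuming NumPy and two unit-variance Gaussians, chosen only for illustration), the relative entropy between the binned distributions barely changes as $\varepsilon$ shrinks and stays near the continuous value $\frac{(\mu_p - \mu_q)^2}{2\sigma^2} / \ln 2 \approx 0.72$ bits:

```python
import numpy as np

def gaussian(x, mu):
    # Unit-variance Gaussian density (illustrative choice).
    return np.exp(-(x - mu)**2 / 2) / np.sqrt(2 * np.pi)

for eps in [0.5, 0.1, 0.01]:
    centers = np.arange(-12, 12, eps) + eps / 2
    P = gaussian(centers, 0.0) * eps   # bin probabilities under p
    Q = gaussian(centers, 1.0) * eps   # bin probabilities under q
    kl_bits = np.sum(P * np.log2(P / Q))
    # Stays near 0.5 / ln(2), about 0.72 bits, for every bin width.
    print(eps, kl_bits)
```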

Now, we could think of differential entropy as the negative relative entropy between $p(x)$ and an unnormalized density $\lambda(x) = 1$,

$$-\int p(x) \log_2 p(x) \, dx = -D_{KL}[p \mid\mid \lambda].$$

Its interpretation would be the difference in the number of bits required by using $-\log_2 \int_{n\varepsilon}^{(n+1)\varepsilon} p(x) \, dx$ bits to encode the $n$-th interval instead of $-\log_2 \varepsilon$ bits. Even though the former would be optimal, this difference can now be negative, because $\lambda$ is cheating (by not integrating to 1) and therefore might assign fewer bits on average than theoretically possible.
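For instance (a standard textbook example rather than anything from the discussion above), a uniform density on $[0, \tfrac{1}{2}]$ has $p(x) = 2$ on that interval, so

$$-\int_0^{1/2} 2 \log_2 2 \, dx = -1,$$

i.e. minus one bit: the value only makes sense as a difference relative to the cheating reference $\lambda$, not as a code length on its own.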

See Sergio Verdu’s talk for a great introduction to relative entropy.

Attribution
Source: Link, Question Author: dippynark, Answer Author: Lucas
