Why is entropy maximized when the probability distribution is uniform?


32

I know that entropy is a measure of the randomness of a process/variable, and it can be defined as follows. For a random variable $X$ taking values in a set $A$: $H(X) = -\sum_{x_i \in A} p(x_i)\log(p(x_i))$. In the book on entropy and information theory by MacKay, he gives this statement in Ch. 2:
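For concreteness, here is a small Python sketch of that definition (just an illustration using NumPy; the base of the logarithm only fixes the unit, bits for base 2):

```python
import numpy as np

def entropy(p, base=2):
    """Shannon entropy H(X) = -sum_i p(x_i) * log(p(x_i)) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                  # convention: 0 * log(0) = 0
    return -np.sum(p * np.log(p)) / np.log(base)

print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits: uniform over 4 outcomes
print(entropy([0.7, 0.1, 0.1, 0.1]))      # ~1.36 bits: less random, lower entropy
```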

Entropy is maximized if p is uniform.

I can understand this intuitively: for example, if all the data points in a set $A$ are picked with probability $1/m$ ($m$ being the cardinality of $A$), then the randomness, i.e. the entropy, increases. But if we know that some points in $A$ occur with higher probability than others (say, in the case of a normal distribution, where the highest concentration of data points lies around the mean, within a small standard-deviation band around it), then the randomness, i.e. the entropy, should decrease.
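As a quick illustrative check of this intuition (nothing from MacKay's book, just a comparison of a uniform weighting against a peaked, normal-shaped weighting over the same points, using NumPy/SciPy):

```python
import numpy as np
from scipy.stats import norm

m = 101
points = np.linspace(-5, 5, m)

p_uniform = np.full(m, 1.0 / m)            # every point equally likely

w = norm.pdf(points, loc=0.0, scale=0.5)   # mass concentrated near the mean
p_peaked = w / w.sum()

def H(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

print(H(p_uniform))   # log2(101) ~ 6.66 bits, the largest possible for 101 outcomes
print(H(p_peaked))    # noticeably smaller
```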

But is there any mathematical proof for this? Like with the equation for $H(X)$: can I differentiate it with respect to $p(x)$, set the derivative to 0, or something like that?

On a side note, is there any connection between the entropy that occurs in information theory and the entropy calculations in chemistry (thermodynamics)?


2
This question is answered (in passing) at stats.stackexchange.com/a/49174/919.
whuber

I am getting quite confused by another statement, given in Christopher Bishop's book, which states that "for a single real variable, the distribution that maximizes the entropy is the Gaussian." It also states that the "multivariate distribution with maximum entropy, for a given covariance, is a Gaussian". How is this statement valid? Isn't the entropy of the uniform distribution always the maximum?
user76170

6
Maximization is always performed subject to constraints on the possible solution. When the constraints are that all probability must vanish beyond predefined limits, the maximum entropy solution is uniform. When instead the constraints are that the expectation and variance must equal predefined values, the ME solution is Gaussian. The statements you quote must have been made within particular contexts where these constraints were stated or at least implicitly understood.
whuber
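For instance, among distributions constrained to have mean 0 and variance 1, the Gaussian has the largest differential entropy. A quick illustrative check using the `entropy()` method of `scipy.stats` distributions (which returns differential entropy in nats):

```python
import numpy as np
from scipy.stats import norm, uniform, laplace

# Three distributions, all with mean 0 and variance 1
print(norm(scale=1.0).entropy())                               # ~1.419 nats (Gaussian)
print(uniform(loc=-np.sqrt(3), scale=2*np.sqrt(3)).entropy())  # ~1.242 nats
print(laplace(scale=1/np.sqrt(2)).entropy())                   # ~1.347 nats
# The Gaussian has the largest differential entropy for the given variance.
```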

2
I probably also should mention that the word "entropy" means something different in the Gaussian setting than it does in the original question here, for then we are discussing entropy of continuous distributions. This "differential entropy" is a different animal than the entropy of discrete distributions. The chief difference is that the differential entropy is not invariant under a change of variables.
whuber
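To see that non-invariance concretely, here is a small sketch using `scipy.stats` (whose `entropy()` method returns differential entropy in nats for continuous distributions):

```python
import numpy as np
from scipy.stats import uniform

# Differential entropy changes under a change of variables:
print(uniform(loc=0, scale=1).entropy())   # X ~ U(0,1): log(1) = 0.0 nats
print(uniform(loc=0, scale=2).entropy())   # Y = 2X ~ U(0,2): log(2) ~ 0.693 nats

# The entropy of a discrete distribution, by contrast, depends only on the
# probabilities, not on how the outcomes are labeled or rescaled.
p = np.array([0.2, 0.3, 0.5])
print(-np.sum(p * np.log(p)))              # unchanged if outcomes are relabeled
```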

So that means maximisation is always with respect to constraints? What if there are no constraints? I mean, can't there be a question like this: which probability distribution has maximum entropy?
user76170

Answers:


25

Heuristically, the probability density function on $\{x_1, x_2, \ldots, x_n\}$ with maximum entropy turns out to be the one that corresponds to the least amount of knowledge of $\{x_1, x_2, \ldots, x_n\}$, in other words the uniform distribution.

Now, for a more formal proof consider the following:

A probability density function on $\{x_1, x_2, \ldots, x_n\}$ is a set of nonnegative real numbers $p_1, \ldots, p_n$ that add up to 1. Entropy is a continuous function of the $n$-tuples $(p_1, \ldots, p_n)$, and these points lie in a compact subset of $\mathbb{R}^n$, so there is an $n$-tuple where entropy is maximized. We want to show this occurs at $(1/n, \ldots, 1/n)$ and nowhere else.

Suppose the $p_j$ are not all equal, say $p_1 < p_2$. (Clearly $n > 1$.) We will find a new probability density with higher entropy. It then follows, since entropy is maximized at some $n$-tuple, that entropy is uniquely maximized at the $n$-tuple with $p_i = 1/n$ for all $i$.

Since $p_1 < p_2$, for small positive $\varepsilon$ we have $p_1+\varepsilon < p_2-\varepsilon$. The entropy of $\{p_1+\varepsilon, p_2-\varepsilon, p_3, \ldots, p_n\}$ minus the entropy of $\{p_1, p_2, p_3, \ldots, p_n\}$ equals

$$-p_1\log\left(\frac{p_1+\varepsilon}{p_1}\right)-\varepsilon\log(p_1+\varepsilon)-p_2\log\left(\frac{p_2-\varepsilon}{p_2}\right)+\varepsilon\log(p_2-\varepsilon)$$

To complete the proof, we want to show this is positive for small enough $\varepsilon$. Rewrite the above expression as

$$-p_1\log\left(1+\frac{\varepsilon}{p_1}\right)-\varepsilon\left(\log p_1+\log\left(1+\frac{\varepsilon}{p_1}\right)\right)-p_2\log\left(1-\frac{\varepsilon}{p_2}\right)+\varepsilon\left(\log p_2+\log\left(1-\frac{\varepsilon}{p_2}\right)\right)$$

Recalling that $\log(1+x) = x + O(x^2)$ for small $x$, the above expression equals

$$-\varepsilon-\varepsilon\log p_1+\varepsilon+\varepsilon\log p_2+O(\varepsilon^2)=\varepsilon\log(p_2/p_1)+O(\varepsilon^2)$$

which is positive when $\varepsilon$ is small enough, since $p_1 < p_2$.
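A quick numerical check of this step (illustrative only, with arbitrarily chosen values):

```python
import numpy as np

def H(p):
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

p = np.array([0.10, 0.40, 0.25, 0.25])       # p1 < p2
eps = 0.01
p_new = p + np.array([eps, -eps, 0.0, 0.0])  # move eps of mass from p2 to p1

print(H(p_new) - H(p))              # positive: the entropy went up
print(eps * np.log(p[1] / p[0]))    # close to the leading term eps*log(p2/p1)
```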

A less rigorous proof is the following:

Consider first the following Lemma:

Let $p(x)$ and $q(x)$ be continuous probability density functions on an interval $I$ in the real numbers, with $p \ge 0$ and $q > 0$ on $I$. We have

$$-\int_I p\log p\,dx \le -\int_I p\log q\,dx$$

if both integrals exist. Moreover, there is equality if and only if $p(x) = q(x)$ for all $x$.

Now, let $p$ be any probability density function on $\{x_1, \ldots, x_n\}$, with $p_i = p(x_i)$. Letting $q_i = 1/n$ for all $i$,

$$-\sum_{i=1}^{n} p_i\log q_i = \sum_{i=1}^{n} p_i\log n = \log n$$

which is the entropy of $q$. Therefore our Lemma says $h(p) \le h(q)$, with equality if and only if $p$ is uniform.
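A small numerical illustration of the Lemma with a random $p$ and uniform $q$ (using NumPy; not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
p = rng.dirichlet(np.ones(n))   # an arbitrary distribution on n points
q = np.full(n, 1.0 / n)         # the uniform distribution

h_p = -np.sum(p * np.log(p))    # entropy of p
bound = -np.sum(p * np.log(q))  # equals log(n), the entropy of q
print(h_p, bound, h_p <= bound) # h(p) <= h(q) = log(n)
```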

Wikipedia also has a brief discussion of this: wiki


11
I admire the effort to present an elementary (calculus-free) proof. A rigorous one-line demonstration is available via the weighted AM-GM inequality by noting that $\exp(H) = \prod (1/p_i)^{p_i} \le \sum p_i \cdot (1/p_i) = n$, with equality holding iff all the $1/p_i$ are equal, QED.
whuber
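A quick numerical check of that identity and inequality (illustrative, using NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(8))

H = -np.sum(p * np.log(p))
geo = np.prod((1.0 / p) ** p)      # weighted geometric mean of the 1/p_i
print(np.isclose(np.exp(H), geo))  # exp(H) equals that weighted geometric mean
print(geo <= len(p))               # weighted AM-GM: geo <= sum p_i*(1/p_i) = n
```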

I don't understand how $\sum_{i=1}^{n} p_i \log n$ can be equal to $\log n$.
user1603472

4
@user1603472 do you mean $\sum_{i=1}^{n} p_i \log n = \log n$? It's because $\sum_{i=1}^{n} p_i \log n = \log n \sum_{i=1}^{n} p_i = \log n \times 1$.
HBeel

@Roland I pulled the $\log n$ outside of the sum since it does not depend on $i$. Then the sum is equal to $1$ because $p_1, \ldots, p_n$ are the densities of a probability mass function.
HBeel

Same explanation with more details can be found here: math.uconn.edu/~kconrad/blurbs/analysis/entropypost.pdf
Roland

14

Entropy in physics and entropy in information theory are not unrelated. They're more different than the name suggests, yet there's clearly a link between them. The purpose of the entropy metric is to measure the amount of information. See my answer with graphs here, which shows how entropy changes as you go from a uniform distribution to a humped one.

The reason why entropy is maximized for a uniform distribution is that it was designed that way! Yes, we're constructing a measure of the lack of information, so we want to assign its highest value to the least informative distribution.

Example. I ask you, "Dude, where's my car?" Your answer is "It's somewhere in the USA, between the Atlantic and Pacific Oceans." This is an example of the uniform distribution. My car could be anywhere in the USA. I didn't get much information from this answer.

However, if you told me "I saw your car an hour ago on Route 66 heading out of Washington, DC", this is not a uniform distribution anymore. The car is more likely to be within 60 miles of DC than anywhere near Los Angeles. There's clearly more information here.

Hence, our measure must have high entropy for the first answer and a lower one for the second. The uniform must be the least informative distribution; it's basically the "I've no idea" answer.


7

The mathematical argument is based on Jensen's inequality for concave functions. That is, if $f(x)$ is a concave function on $[a,b]$ and $y_1, \ldots, y_n$ are points in $[a,b]$, then:

$$n\,f\!\left(\frac{y_1 + \cdots + y_n}{n}\right) \ge f(y_1) + \cdots + f(y_n)$$

Apply this to the concave function $f(x) = -x\log(x)$ and Jensen's inequality with $y_i = p(x_i)$, and you have the proof. Note that the $p(x_i)$ define a discrete probability distribution, so their sum is 1. What you get is $\log(n) \ge -\sum_{i=1}^{n} p(x_i)\log(p(x_i))$, with equality for the uniform distribution.
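A short numerical check of this inequality for a random distribution (illustrative, using NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10
p = rng.dirichlet(np.ones(n))      # y_i = p(x_i), so mean(p) = 1/n exactly

f = lambda x: -x * np.log(x)       # concave on (0, 1]
lhs = n * f(np.mean(p))            # n*f((y_1+...+y_n)/n) = n*f(1/n) = log(n)
rhs = np.sum(f(p))                 # f(y_1)+...+f(y_n) = entropy of p
print(lhs, np.log(n))              # identical
print(rhs <= lhs)                  # Jensen: H(p) <= log(n)
```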


1
I actually find the proof via Jensen's inequality to be conceptually much deeper than the AM-GM one.
Casebash

4

On a side note, is there any connection between the entropy that occurs in information theory and the entropy calculations in chemistry (thermodynamics)?

Yes, there is! You can see the work of Jaynes and of many others who followed him (such as here and here, for instance).

But the main idea is that statistical mechanics (and other fields in science as well) can be viewed as the inference we do about the world.

As further reading, I'd recommend Ariel Caticha's book on this topic.


1

An intuitive explanation:

If we put more probability mass on one event of a random variable, we will have to take some away from other events. That one event will have less information content and more weight, the others more information content and less weight. Therefore the entropy, being the expected information content, will go down, since the event with lower information content is weighted more heavily.

As an extreme case, imagine one event getting a probability of almost one; the other events will then have a combined probability of almost zero, and the entropy will be very low.
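A tiny numerical illustration of this extreme case (using NumPy, with an arbitrary four-outcome distribution):

```python
import numpy as np

def H(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

for big in [0.25, 0.9, 0.99, 0.999]:
    rest = (1.0 - big) / 3
    print(big, H([big, rest, rest, rest]))
# entropy falls from 2 bits (uniform) toward 0 as one event takes nearly all the mass
```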


0

Main idea: take the partial derivative with respect to each $p_i$, set them all to zero, and solve the resulting system of equations.

Take a finite number of $p_i$, $i = 1, \ldots, n$, as an example, and treat the last one as determined by the others: denote $q = p_n = 1 - \sum_{i=1}^{n-1} p_i$.

$$H = -\sum_{i=1}^{n-1} p_i \log_2 p_i - q \log_2 q, \qquad H \ln 2 = -\sum_{i=1}^{n-1} p_i \ln p_i - q \ln q$$

$$\frac{\partial (H \ln 2)}{\partial p_i} = \ln\frac{q}{p_i} = 0$$

Then $q = p_i$ for every $i$, i.e., $p_1 = p_2 = \ldots = p_n$.
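The same conclusion can be checked numerically by maximizing the entropy under the sum-to-one constraint, for example with SciPy's SLSQP optimizer (an illustrative sketch, not part of the derivation above):

```python
import numpy as np
from scipy.optimize import minimize

n = 5

def neg_entropy(p):
    return np.sum(p * np.log(p))   # minimizing this maximizes H

cons = {"type": "eq", "fun": lambda p: np.sum(p) - 1.0}  # probabilities sum to 1
bounds = [(1e-9, 1.0)] * n
p0 = np.random.dirichlet(np.ones(n))                     # random feasible start

res = minimize(neg_entropy, p0, method="SLSQP", bounds=bounds, constraints=cons)
print(res.x)                 # ~[0.2, 0.2, 0.2, 0.2, 0.2]
print(-res.fun, np.log(n))   # maximized entropy ~ log(5)
```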


I am glad you pointed out this is the "main idea," because it's only a part of the analysis. The other part, which might not be intuitive and actually is a little trickier, is to verify that this critical point is the global maximum by studying the behavior of the entropy as one or more of the $p_i$ shrinks to zero.
whuber
Licensed under cc by-sa 3.0 with attribution required.