Jacobians--the absolute determinants of the change of variable function--appear formidable and can be complicated. Nevertheless, they are an essential and unavoidable part of the calculation of a multivariate change of variable. It would seem there's nothing for it but to write down a $k+1$ by $k+1$ matrix of derivatives and do the calculation.
There's a better way. It's shown at the end in the "Solution" section. Because the purpose of this post is to introduce statisticians to what may be a new method for many, much of it is devoted to explaining the machinery behind the solution. This is the algebra of differential forms. (Differential forms are the things that one integrates in multiple dimensions.) A detailed, worked example is included to help make this machinery more familiar.
Background
Over a century ago, mathematicians developed the theory of differential algebra to work with the "higher-order derivatives" that arise in multi-dimensional geometry. Determinants are a special case of the basic objects handled by such algebras, which in general are alternating multilinear forms. The beauty of this lies in how simple the calculations can become.
Here is all you need to know.
1. A differential is an expression of the form "$dx_i$": simply the concatenation of "$d$" with any variable name.
2. A one-form is a linear combination of differentials, such as $dx_1 + dx_2$ or even $x_2\,dx_1 - \exp(x_2)\,dx_2$. That is, the coefficients are functions of the variables.
3. Forms can be "multiplied" using a wedge product, written $\wedge$. This product is anti-commutative (also called alternating): for any two one-forms $\omega$ and $\eta$,
$$\omega\wedge\eta = -\eta\wedge\omega.$$
This multiplication is linear and associative: in other words, it works in the familiar fashion. An immediate consequence is that $\omega\wedge\omega = -\omega\wedge\omega$, implying the square of any one-form is always zero. That makes multiplication extremely easy!
4. For the purposes of manipulating the integrands that appear in probability calculations, an expression like $dx_1\,dx_2\cdots dx_{k+1}$ can be understood as $|dx_1\wedge dx_2\wedge\cdots\wedge dx_{k+1}|$.
5. When $y = g(x_1,\ldots,x_n)$ is a function, then its differential is given by differentiation:
$$dy = dg(x_1,\ldots,x_n) = \frac{\partial g}{\partial x_1}(x_1,\ldots,x_n)\,dx_1 + \cdots + \frac{\partial g}{\partial x_n}(x_1,\ldots,x_n)\,dx_n.$$
The connection with Jacobians is this: the Jacobian of a transformation $(y_1,\ldots,y_n) = F(x_1,\ldots,x_n) = (f_1(x_1,\ldots,x_n),\ldots,f_n(x_1,\ldots,x_n))$ is, up to sign, simply the coefficient of $dx_1\wedge\cdots\wedge dx_n$ that appears in computing
$$dy_1\wedge\cdots\wedge dy_n = df_1(x_1,\ldots,x_n)\wedge\cdots\wedge df_n(x_1,\ldots,x_n)$$
after expanding each of the $df_i$ as a linear combination of the $dx_j$ using rule (5).
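As a quick check that this coefficient really is the determinant, take two one-forms in two variables and expand with the rules above (the squared terms vanish by rule (3)):
$$(a\,dx_1 + b\,dx_2)\wedge(c\,dx_1 + e\,dx_2) = ae\,dx_1\wedge dx_2 + bc\,dx_2\wedge dx_1 = (ae - bc)\,dx_1\wedge dx_2,$$
whose coefficient is the familiar $2\times 2$ determinant.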
Example
The simplicity of this definition of a Jacobian is appealing. Not yet convinced it's worthwhile? Consider the well-known problem of converting two-dimensional integrals from Cartesian coordinates $(x,y)$ to polar coordinates $(r,\theta)$, where $(x,y) = (r\cos(\theta), r\sin(\theta))$. The following is an utterly mechanical application of the preceding rules, where "$(*)$" is used to abbreviate expressions that will obviously disappear by virtue of rule (3), which implies $dr\wedge dr = d\theta\wedge d\theta = 0$.
$$\begin{aligned}
dx\,dy = |dx\wedge dy| &= |d(r\cos(\theta))\wedge d(r\sin(\theta))| \\
&= |(\cos(\theta)\,dr - r\sin(\theta)\,d\theta)\wedge(\sin(\theta)\,dr + r\cos(\theta)\,d\theta)| \\
&= |(*)\,dr\wedge dr + (*)\,d\theta\wedge d\theta - r\sin(\theta)\,d\theta\wedge\sin(\theta)\,dr + \cos(\theta)\,dr\wedge r\cos(\theta)\,d\theta| \\
&= |0 + 0 + r\sin^2(\theta)\,dr\wedge d\theta + r\cos^2(\theta)\,dr\wedge d\theta| \\
&= |r(\sin^2(\theta) + \cos^2(\theta))\,dr\wedge d\theta| \\
&= r\,dr\,d\theta.
\end{aligned}$$
The point of this is the ease with which such calculations can be performed, without messing about with matrices, determinants, or other such multi-indicial objects. You just multiply things out, remembering that wedges are anti-commutative. It's easier than what is taught in high school algebra.
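If you like, such a computation can also be double-checked mechanically with a computer algebra system. Here is a minimal sketch using SymPy (my own addition for verification, not part of the original argument), computing the Jacobian determinant of the polar-coordinate map directly:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# Cartesian coordinates expressed in polar coordinates.
x = r * sp.cos(theta)
y = r * sp.sin(theta)

# Jacobian matrix of (x, y) with respect to (r, theta) and its determinant.
J = sp.Matrix([x, y]).jacobian([r, theta])
print(sp.simplify(J.det()))  # prints: r
```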
Preliminaries
Let's see this differential algebra in action. In this problem, the PDF of the joint distribution of $(X_1, X_2, \ldots, X_{k+1})$ is the product of the individual PDFs (because the $X_i$ are assumed to be independent). In order to handle the change to the variables $Y_i$ we must be explicit about the differential elements that will be integrated. These form the term $dx_1\,dx_2\cdots dx_{k+1}$. Including the PDF gives the probability element
$$\begin{aligned}
f_X(x,\alpha)\,dx_1\cdots dx_{k+1} &\propto \left(x_1^{\alpha_1-1}\exp(-x_1)\right)\cdots\left(x_{k+1}^{\alpha_{k+1}-1}\exp(-x_{k+1})\right)dx_1\cdots dx_{k+1} \\
&= x_1^{\alpha_1-1}\cdots x_{k+1}^{\alpha_{k+1}-1}\exp\left(-(x_1+\cdots+x_{k+1})\right)dx_1\cdots dx_{k+1}.
\end{aligned}$$
(The normalizing constant has been ignored; it will be recovered at the end.)
Staring at the definitions of the $Y_i$ for a few seconds ought to reveal the utility of introducing the new variable
$$Z = X_1 + X_2 + \cdots + X_{k+1},$$
giving the relationships
$$X_i = Y_i Z.$$
This suggests making the change of variables $x_i \to y_i z$ in the probability element. The intention is to retain the first $k$ variables $y_1, \ldots, y_k$ along with $z$ and then integrate out $z$. To do so, we have to re-express all the $dx_i$ in terms of the new variables. This is the heart of the problem. It's where the differential algebra takes place. To begin with,
$$dx_i = d(y_i z) = y_i\,dz + z\,dy_i.$$
Note that since $Y_1 + Y_2 + \cdots + Y_{k+1} = 1$, then
$$0 = d(1) = d(y_1 + y_2 + \cdots + y_{k+1}) = dy_1 + dy_2 + \cdots + dy_{k+1}.$$
Consider the one-form
$$\omega = dx_1 + \cdots + dx_k = z(dy_1 + \cdots + dy_k) + (y_1 + \cdots + y_k)\,dz.$$
It appears in the differential of the last variable:
$$dx_{k+1} = z\,dy_{k+1} + y_{k+1}\,dz = -z(dy_1 + \cdots + dy_k) + (1 - y_1 - \cdots - y_k)\,dz = dz - \omega.$$
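To see the final equality explicitly (a small added check using the definition of $\omega$ above):
$$dz - \omega = dz - z(dy_1 + \cdots + dy_k) - (y_1 + \cdots + y_k)\,dz = -z(dy_1 + \cdots + dy_k) + (1 - y_1 - \cdots - y_k)\,dz.$$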
The value of this lies in the observation that
$$dx_1 \wedge \cdots \wedge dx_k \wedge \omega = 0$$
because, when you expand this product, there is one term containing $dx_1 \wedge dx_1 = 0$ as a factor, another containing $dx_2 \wedge dx_2 = 0$, and so on: they all disappear. Consequently,
$$dx_1 \wedge \cdots \wedge dx_k \wedge dx_{k+1} = dx_1 \wedge \cdots \wedge dx_k \wedge dz - dx_1 \wedge \cdots \wedge dx_k \wedge \omega = dx_1 \wedge \cdots \wedge dx_k \wedge dz.$$
Whence (because all products $dz \wedge dz$ disappear),
$$dx_1 \wedge \cdots \wedge dx_{k+1} = (z\,dy_1 + y_1\,dz) \wedge \cdots \wedge (z\,dy_k + y_k\,dz) \wedge dz = z^k\,dy_1 \wedge \cdots \wedge dy_k \wedge dz.$$
The Jacobian is simply $|z^k| = z^k$, the coefficient of the differential product on the right hand side.
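For readers who like to confirm such results by brute force, here is a minimal sketch (my own check, not part of the derivation) that recomputes this Jacobian for the special case $k = 2$ by differentiating the inverse map $x_i = y_i z$, $x_3 = z(1 - y_1 - y_2)$ with SymPy; the determinant comes out to $z^2$, in agreement with $z^k$:

```python
import sympy as sp

y1, y2, z = sp.symbols('y1 y2 z', positive=True)

# Inverse transformation for k = 2: (y1, y2, z) -> (x1, x2, x3).
x1 = y1 * z
x2 = y2 * z
x3 = z * (1 - y1 - y2)

# Jacobian determinant of (x1, x2, x3) with respect to (y1, y2, z).
J = sp.Matrix([x1, x2, x3]).jacobian([y1, y2, z])
print(sp.simplify(J.det()))  # prints: z**2
```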
Solution
The transformation $(x_1, \ldots, x_k, x_{k+1}) \to (y_1, \ldots, y_k, z)$ is one-to-one: its inverse is given by $x_i = y_i z$ for $1 \le i \le k$ and $x_{k+1} = z(1 - y_1 - \cdots - y_k)$. Therefore we don't have to fuss any more about the new probability element; it simply is
$$\begin{aligned}
&(zy_1)^{\alpha_1-1}\cdots(zy_k)^{\alpha_k-1}\bigl(z(1-y_1-\cdots-y_k)\bigr)^{\alpha_{k+1}-1}\exp(-z)\,\bigl|z^k\,dy_1\wedge\cdots\wedge dy_k\wedge dz\bigr| \\
&\quad= \left(z^{\alpha_1+\cdots+\alpha_{k+1}-1}\exp(-z)\,dz\right)\left(y_1^{\alpha_1-1}\cdots y_k^{\alpha_k-1}(1-y_1-\cdots-y_k)^{\alpha_{k+1}-1}\,dy_1\cdots dy_k\right).
\end{aligned}$$
That is manifestly a product of a Gamma$(\alpha_1+\cdots+\alpha_{k+1})$ distribution (for $Z$) and a Dirichlet$(\alpha)$ distribution (for $(Y_1,\ldots,Y_k)$). In fact, since the original normalizing constant must have been a product of $\Gamma(\alpha_i)$, we deduce immediately that the new normalizing constant must be divided by $\Gamma(\alpha_1+\cdots+\alpha_{k+1})$, enabling the PDF to be written
$$f_Y(y,\alpha) = \frac{\Gamma(\alpha_1+\cdots+\alpha_{k+1})}{\Gamma(\alpha_1)\cdots\Gamma(\alpha_{k+1})}\left(y_1^{\alpha_1-1}\cdots y_k^{\alpha_k-1}(1-y_1-\cdots-y_k)^{\alpha_{k+1}-1}\right).$$
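As an informal numerical cross-check (my own addition, assuming NumPy is available; the parameter values below are arbitrary illustrations), one can simulate independent Gamma variables, form $Y_i = X_i / Z$, and compare the empirical means of the $Y_i$ against the Dirichlet$(\alpha)$ values $E[Y_i] = \alpha_i/(\alpha_1+\cdots+\alpha_{k+1})$, and the mean of $Z$ against $\alpha_1+\cdots+\alpha_{k+1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([1.5, 2.0, 3.0, 0.7])   # arbitrary alpha_1, ..., alpha_{k+1}
n = 200_000

# Independent Gamma(alpha_i, 1) draws, one column per variable.
x = rng.gamma(shape=alpha, scale=1.0, size=(n, len(alpha)))
z = x.sum(axis=1)          # Z = X_1 + ... + X_{k+1}, should be Gamma(sum(alpha))
y = x / z[:, None]         # (Y_1, ..., Y_{k+1}), should be Dirichlet(alpha)

print("empirical E[Y_i]:", y.mean(axis=0))
print("Dirichlet E[Y_i]:", alpha / alpha.sum())
print("empirical E[Z]:  ", z.mean(), "vs", alpha.sum())
```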