6.11. A COFACTOR IDENTITY 159

$\det(A) = 0$. Thus, the constant term of $\det(\lambda I - A)$ is $0$. Consider $\varepsilon I + A \equiv A_{\varepsilon}$ for small real $\varepsilon$. The characteristic polynomial of $A_{\varepsilon}$ is
\[ \det(\lambda I - A_{\varepsilon}) = \det((\lambda - \varepsilon) I - A). \]
This is of the form
\[ (\lambda - \varepsilon)^{p} + a_{1}(\lambda - \varepsilon)^{p-1} + \cdots + (\lambda - \varepsilon)^{m} a_{m} \]

where the $a_{j}$ are the coefficients in the characteristic polynomial for $A$ and $a_{k} = 0$ for $k > m$, $a_{m} \neq 0$. The constant term of this polynomial in $\lambda$ must be nonzero for all $\varepsilon$ small enough because it is of the form
\[ (-1)^{m} \varepsilon^{m} a_{m} + (\text{higher order terms in } \varepsilon) = \varepsilon^{m} \left[ a_{m} (-1)^{m} + \varepsilon C(\varepsilon) \right], \]
which is nonzero for all positive but very small $\varepsilon$. Thus $\varepsilon I + A$ is invertible for all $\varepsilon$ small enough but nonzero. ■
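As a concrete instance of this argument, consider a hypothetical $2\times 2$ example (the matrix is invented for illustration and is not from the text): a singular $A$ becomes invertible once a small multiple of the identity is added.

```python
# Sketch: A is singular, so det(A) = 0, yet det(eps*I + A) is nonzero
# for every small eps > 0, matching the argument above.
# Here det(eps*I + A) = (1 + eps)(4 + eps) - 4 = eps^2 + 5*eps.

def det2(M):
    # determinant of a 2x2 matrix
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1.0, 2.0], [2.0, 4.0]]  # rank one, hence det(A) = 0

assert det2(A) == 0.0
for eps in (0.1, 1e-3, 1e-6):
    A_eps = [[A[0][0] + eps, A[0][1]], [A[1][0], A[1][1] + eps]]
    assert det2(A_eps) != 0.0  # eps*I + A is invertible
```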

Recall that for $A$ a $p \times p$ matrix, $\operatorname{cof}(A)_{ij}$ is the determinant of the matrix which results from deleting the $i^{th}$ row and the $j^{th}$ column, multiplied by $(-1)^{i+j}$. In the proof and in what follows, I am using $Dg$ to equal the matrix of the linear transformation $Dg$ taken with respect to the usual basis on $\mathbb{R}^{p}$. Thus $(Dg)_{ij} = \partial g_{i} / \partial x_{j}$ where $g = \sum_{i} g_{i} e_{i}$ for $e_{i}$ the standard basis vectors.
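This definition can be sketched directly in code (a minimal illustration; the example matrix and helper names are invented here, not taken from the text). The last two checks mirror the facts used in the proof below: expanding along a row with its own cofactors recovers the determinant, while using the cofactors of a different row gives $0$.

```python
def minor(M, i, j):
    # the matrix which results from deleting row i and column j
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def det(M):
    # Laplace (cofactor) expansion along the first row; fine for small matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(len(M)))

def cof(M, i, j):
    # cof(M)_{ij} = (-1)^{i+j} * det(minor of M at (i, j))
    return (-1) ** (i + j) * det(minor(M, i, j))

A = [[2, 0, 1], [1, 3, 0], [0, 1, 4]]

# expansion along row 0 with its own cofactors recovers det(A)
assert sum(A[0][j] * cof(A, 0, j) for j in range(3)) == det(A)
# expansion of row 1 against the cofactors of row 0 gives 0
assert sum(A[1][j] * cof(A, 0, j) for j in range(3)) == 0
```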

Lemma 6.11.2 Let $g : U \to \mathbb{R}^{p}$ be $C^{2}$ where $U$ is an open subset of $\mathbb{R}^{p}$. Then
\[ \sum_{j=1}^{p} \operatorname{cof}(Dg)_{ij,j} = 0, \]
where here $(Dg)_{ij} \equiv g_{i,j} \equiv \frac{\partial g_{i}}{\partial x_{j}}$. Also, $\operatorname{cof}(Dg)_{ij} = \frac{\partial \det(Dg)}{\partial g_{i,j}}$.
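The $p = 2$ case can be checked numerically: there $\operatorname{cof}(Dg)_{11} = g_{2,2}$ and $\operatorname{cof}(Dg)_{12} = -g_{2,1}$, so the claimed sum is $g_{2,21} - g_{2,12}$, which vanishes by equality of mixed partials. A sketch using finite differences follows; the choice of map $g$ and all helper names are assumptions made only for this illustration.

```python
import math

def g(x, y):
    # an arbitrary C^2 map, chosen only for illustration
    return (x * x * y + math.sin(y), math.exp(x) + x * y * y)

def partial(f, var, x, y, h=1e-5):
    # central difference of the scalar function f in variable var (0 = x, 1 = y)
    if var == 0:
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def cof_row1(x, y):
    # (cof(Dg)_{11}, cof(Dg)_{12}) = (g_{2,2}, -g_{2,1})
    g2 = lambda u, v: g(u, v)[1]
    return (partial(g2, 1, x, y), -partial(g2, 0, x, y))

def div_cof_row1(x, y, h=1e-4):
    # sum over j of cof(Dg)_{1j,j}, via an outer central difference
    c11 = lambda u, v: cof_row1(u, v)[0]
    c12 = lambda u, v: cof_row1(u, v)[1]
    return partial(c11, 0, x, y, h) + partial(c12, 1, x, y, h)

print(abs(div_cof_row1(0.7, -0.3)))  # close to zero, as the lemma asserts
```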

Proof: From the cofactor expansion theorem,
\[ \delta_{kj} \det(Dg) = \sum_{i=1}^{p} g_{i,k} \operatorname{cof}(Dg)_{ij}. \tag{6.17} \]
This is because if $k \neq j$, the expression on the right is the cofactor expansion of a determinant with two equal columns, while if $k = j$, it is just the cofactor expansion of the determinant. In particular,
\[ \frac{\partial \det(Dg)}{\partial g_{i,j}} = \operatorname{cof}(Dg)_{ij}, \tag{6.18} \]

which shows the last claim of the lemma. Assume to begin with that $Dg(x)$ is invertible. Differentiate 6.17 with respect to $x_{j}$ and sum on $j$. This yields

\[ \sum_{r,s,j} \delta_{kj} \frac{\partial (\det Dg)}{\partial g_{r,s}}\, g_{r,sj} = \sum_{i,j} g_{i,kj} \operatorname{cof}(Dg)_{ij} + \sum_{i,j} g_{i,k} \operatorname{cof}(Dg)_{ij,j}. \]

Hence, using $\delta_{kj} = 0$ if $j \neq k$ and 6.18,
\[ \sum_{r,s} \operatorname{cof}(Dg)_{rs}\, g_{r,sk} = \sum_{r,s} g_{r,ks} \operatorname{cof}(Dg)_{rs} + \sum_{i,j} g_{i,k} \operatorname{cof}(Dg)_{ij,j}. \]