172 CHAPTER 7. IMPLICIT FUNCTION THEOREM
This shows the first claim of the theorem. The second claim follows from similar reasoning. Suppose $H(x)$ has a positive eigenvalue $\lambda_2$. Then let $v$ be an eigenvector for this eigenvalue. Then from 7.21,
\[
f(x+tv) = f(x) + \frac{1}{2}t^{2}v^{T}H(x)v + \frac{1}{2}t^{2}\left(v^{T}\left(H(x+tv) - H(x)\right)v\right)
\]
which implies
\begin{align*}
f(x+tv) &= f(x) + \frac{1}{2}t^{2}\lambda_2 \left|v\right|^{2} + \frac{1}{2}t^{2}\left(v^{T}\left(H(x+tv) - H(x)\right)v\right) \\
&\geq f(x) + \frac{1}{4}t^{2}\lambda_2 \left|v\right|^{2}
\end{align*}
whenever $t$ is small enough. Thus in the direction $v$ the function has a local minimum at $x$. The assertion about the local maximum in some direction follows similarly. This proves the theorem. ■
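The argument above can be checked numerically. The following sketch (an illustration, not part of the text: the function $f(x,y) = x^2 - y^2$ and the step size $t$ are assumptions chosen for the demonstration) evaluates $f(x+tv) - f(x)$ along each eigenvector of an indefinite Hessian at a critical point, and the sign of the difference matches the sign of the corresponding eigenvalue.

```python
import numpy as np

# Illustrative example: f(x, y) = x^2 - y^2 has a critical point at the
# origin with Hessian diag(2, -2): one positive, one negative eigenvalue.
def f(p):
    x, y = p
    return x**2 - y**2

H = np.array([[2.0, 0.0], [0.0, -2.0]])  # Hessian at the origin
eigvals, eigvecs = np.linalg.eigh(H)

x0 = np.zeros(2)
t = 1e-3  # small step, as in the theorem
for lam, v in zip(eigvals, eigvecs.T):
    diff = f(x0 + t * v) - f(x0)
    # Along an eigenvector with lam > 0 the function increases (a local
    # minimum in that direction); with lam < 0 it decreases (a local max).
    print(f"eigenvalue {lam:+.0f}: f(x+tv) - f(x) = {diff:+.1e}")
```

This mirrors the inequality in the proof: for small $t$, the $\frac{1}{2}t^2\lambda_2|v|^2$ term dominates the remainder.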
This theorem is an analogue of the second derivative test for higher dimensions. As in one dimension, when there is a zero eigenvalue, it may be impossible to determine from the Hessian matrix what the local qualitative behavior of the function is. For example, consider
\[
f_1(x,y) = x^4 + y^2, \qquad f_2(x,y) = -x^4 + y^2.
\]
Then $Df_i(0,0) = 0$ and for both functions, the Hessian matrix evaluated at $(0,0)$ equals
\[
\begin{pmatrix} 0 & 0 \\ 0 & 2 \end{pmatrix}
\]
but the behavior of the two functions is very different near the origin. The second has a saddle point while the first has a minimum there.
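This can be seen concretely. The sketch below (illustrative; the finite-difference helper `hessian` and the step size are assumptions, not from the text) approximates both Hessians at the origin and then samples each function along the $x$-axis, where the two behaviors diverge.

```python
import numpy as np

def f1(x, y):
    return x**4 + y**2

def f2(x, y):
    return -x**4 + y**2

def hessian(f, x, y, h=1e-4):
    # Central finite-difference approximation of the Hessian at (x, y).
    fxx = (f(x + h, y) - 2*f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2*f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return np.array([[fxx, fxy], [fxy, fyy]])

# Both Hessians at (0, 0) are approximately [[0, 0], [0, 2]] ...
print(hessian(f1, 0.0, 0.0))
print(hessian(f2, 0.0, 0.0))
# ... yet along the x-axis f1 increases while f2 decreases:
print(f1(0.1, 0.0), f2(0.1, 0.0))  # positive and negative, respectively
```

The zero eigenvalue leaves the behavior in the $x$-direction invisible to the Hessian, exactly as the text asserts.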
7.6 The Rank Theorem

This is a very interesting result. The proof follows Marsden and Hoffman. First here is some linear algebra.
Theorem 7.6.1 Let $L : \mathbb{R}^n \to \mathbb{R}^N$ have rank $m$. Then there exists a basis
\[
\{u_1, \cdots, u_m, u_{m+1}, \cdots, u_n\}
\]
such that a basis for $\ker(L)$ is $\{u_{m+1}, \cdots, u_n\}$.
Proof: Since $L$ has rank $m$, there is a basis for $L(\mathbb{R}^n)$ which is of the form
\[
\{Lu_1, \cdots, Lu_m\}.
\]
Then if $\sum_i c_i u_i = 0$, you can apply $L$ to both sides and conclude that each $c_i = 0$. Hence $\{u_1, \cdots, u_m\}$ is linearly independent. Let $\{v_1, \cdots, v_k\}$ be a basis for $\ker(L)$. Let $x \in \mathbb{R}^n$. Then $Lx = \sum_{i=1}^m c_i Lu_i$ for some choice of scalars $c_i$. Hence $L\left(x - \sum_{i=1}^m c_i u_i\right) = 0$, which shows that there exist $d_j$ such that $x = \sum_{i=1}^m c_i u_i + \sum_{j=1}^k d_j v_j$. It follows that
\[
\operatorname{span}(u_1, \cdots, u_m, v_1, \cdots, v_k) = \mathbb{R}^n.
\]
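For a concrete matrix, such a basis can be exhibited numerically. The sketch below (an illustration under assumptions: the theorem concerns abstract linear maps, while here $L$ is a specific matrix, and the singular value decomposition is used as one convenient way to produce the basis) takes the right singular vectors of $L$ as $u_1, \cdots, u_n$: the first $m$ have linearly independent images, and the last $n - m$ span $\ker(L)$.

```python
import numpy as np

# A rank-1 map from R^3 to R^2 (second row is twice the first).
L = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
m = np.linalg.matrix_rank(L)   # m = 1

# The right singular vectors form a basis of R^n of the kind in the
# theorem: L u_j = sigma_j (left vector), and sigma_j = 0 for j > m.
_, s, Vt = np.linalg.svd(L)
basis = Vt.T                   # columns u_1, ..., u_n

for j in range(m, basis.shape[1]):
    # u_{m+1}, ..., u_n lie in the kernel: L u_j = 0
    print(np.allclose(L @ basis[:, j], 0))          # True

images = L @ basis[:, :m]
print(np.linalg.matrix_rank(images) == m)           # True: the L u_i
# for i <= m are linearly independent, as in the proof.
```

By rank-nullity, $k = n - m$, so relabeling $v_j$ as $u_{m+j}$ gives exactly the basis of the theorem.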