and so forth. Thus each of them can be considered as a linear transformation with values in some vector space. When you consider these vector spaces, you see that they can also be regarded as multilinear functions on $X$ with values in $Y$. Now consider the product of two linear transformations, $A(y)B(y)w$, where everything is defined so that this makes sense and $w$ is an appropriate vector. Then if each of these linear transformations can be differentiated, you would do the following simple computation.

$$\begin{aligned}
&(A(y+u)B(y+u) - A(y)B(y))(w)\\
&\qquad = (A(y+u)B(y+u) - A(y)B(y+u) + A(y)B(y+u) - A(y)B(y))(w)\\
&\qquad = ((DA(y)u + o(u))B(y+u) + A(y)(DB(y)u + o(u)))(w)\\
&\qquad = (DA(y)(u)B(y+u) + A(y)DB(y)(u) + o(u))(w)\\
&\qquad = (DA(y)(u)B(y) + A(y)DB(y)(u) + o(u))(w)
\end{aligned}$$
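The product rule read off from this computation can be checked numerically. The following is a minimal sketch with hypothetical matrix-valued maps $A$, $B$ (all names and choices here are illustrative, not from the text), comparing a finite-difference quotient of $y \mapsto A(y)B(y)w$ against $(DA(y)(u)B(y) + A(y)DB(y)(u))(w)$.

```python
import numpy as np

# Hypothetical matrix-valued maps A, B : R^2 -> L(R^2, R^2), chosen only to
# illustrate the product rule; they do not come from the text.
def A(y):
    return np.array([[y[0]**2,    y[1]],
                     [y[0]*y[1],  1.0 ]])

def B(y):
    return np.array([[np.sin(y[0]), y[1]**2    ],
                     [1.0,          y[0] + y[1]]])

def DA(y, u):
    # Directional derivative DA(y)(u), computed entrywise by hand.
    return np.array([[2*y[0]*u[0],            u[1]],
                     [y[1]*u[0] + y[0]*u[1],  0.0 ]])

def DB(y, u):
    return np.array([[np.cos(y[0])*u[0], 2*y[1]*u[1]],
                     [0.0,               u[0] + u[1]]])

y = np.array([0.3, -0.7])
u = np.array([0.2, 0.5])
w = np.array([1.0, -2.0])
eps = 1e-6

# Finite-difference approximation of D(AB)(y)(u) applied to w ...
fd = (A(y + eps*u) @ B(y + eps*u) - A(y) @ B(y)) @ w / eps
# ... versus the product rule DA(y)(u)B(y) + A(y)DB(y)(u).
pr = (DA(y, u) @ B(y) + A(y) @ DB(y, u)) @ w

print(np.allclose(fd, pr, atol=1e-4))
```

The two vectors agree up to the $o(u)$ error of the forward difference, as the derivation predicts.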

Then
$$u \mapsto (DA(y)(u)B(y) + A(y)DB(y)(u))(w)$$
is clearly linear and
$$(u,w) \mapsto (DA(y)(u)B(y) + A(y)DB(y)(u))(w)$$
is bilinear and continuous as a function of $y$. By this we mean that for a fixed choice of $(u,w)$, the resulting $Y$-valued function just described is continuous. Now if each of $AB$, $DA$, $DB$ can be differentiated, you could replace $y$ with $y + \hat{u}$ and do a similar computation to obtain as many differentiations as desired, the $k^{th}$ differentiation yielding a $k$-linear function. You can do this as long as $A$ and $B$ have derivatives. Now in the case of the implicit function theorem, you
have
$$Dx(y) = -D_{1}f(x(y),y)^{-1}D_{2}f(x(y),y).$$
By Lemma 18.1.4, the implicit function theorem, and the chain rule, this is the situation just discussed.
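To make the "similar computation" explicit: differentiating $y \mapsto DA(y)(u)B(y) + A(y)DB(y)(u)$ once more, in a direction $\hat{u}$, gives the bilinear second derivative of the product (a sketch, assuming $A$ and $B$ are twice differentiable):

$$D^{2}(AB)(y)(u,\hat{u}) = D^{2}A(y)(u,\hat{u})B(y) + DA(y)(u)DB(y)(\hat{u}) + DA(y)(\hat{u})DB(y)(u) + A(y)D^{2}B(y)(u,\hat{u}).$$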
Thus $D^{2}x(y)$ can be obtained. The formula for it will only involve $Dx$, which is known to be continuous. Thus one can continue in this way, finding derivatives until $f$ fails to have them. The inverse map never creates difficulties because it is differentiable of order $m$ for any $m$, thanks to Lemma 18.1.4. Thus one can conclude the following corollary.

Corollary 18.2.1 In the implicit and inverse function theorems, you can replace $C^{1}$ with $C^{k}$ in the statements of the theorems for any $k \in \mathbb{N}$.
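The implicit-function-theorem formula $Dx(y) = -D_{1}f(x(y),y)^{-1}D_{2}f(x(y),y)$ can also be sanity-checked numerically. A minimal sketch, using a hypothetical scalar example $f(x,y) = x^{3} + x - y$ (not from the text), with $x(y)$ computed by Newton's method:

```python
# Hypothetical scalar example f(x, y) = x^3 + x - y (not from the text);
# x(y) denotes the implicit solution of f(x(y), y) = 0.
def f(x, y):
    return x**3 + x - y

def solve_x(y, x0=0.0, iters=50):
    # Newton's method in x, using f_x(x, y) = 3x^2 + 1 (never zero here).
    x = x0
    for _ in range(iters):
        x -= f(x, y) / (3*x**2 + 1)
    return x

y = 2.0
x = solve_x(y)                     # x(2) = 1 since 1^3 + 1 = 2

# Implicit function theorem: Dx(y) = -D1 f(x(y), y)^{-1} D2 f(x(y), y).
D1f = 3*x**2 + 1
D2f = -1.0
dx_formula = -D2f / D1f            # = 1/4 here

# Compare with a finite-difference quotient of the solved x(y).
eps = 1e-6
dx_fd = (solve_x(y + eps) - solve_x(y)) / eps

print(abs(dx_formula - dx_fd) < 1e-4)
```

Since $D_{1}f = 3x^{2}+1$ never vanishes, the hypothesis of the theorem holds at every point, and the formula matches the finite-difference slope.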