
Lemma 13.3.7 Suppose $\{x_1, x_2, \cdots, x_r\}$ is an orthonormal set of vectors. Then if $c_1, \cdots, c_r$ are scalars,
$$\left| \sum_{k=1}^{r} c_k x_k \right|^2 = \sum_{k=1}^{r} |c_k|^2.$$

Proof: This follows from the definition. From the properties of the dot product and using the fact that the given set of vectors is orthonormal,
$$\left| \sum_{k=1}^{r} c_k x_k \right|^2 = \left( \sum_{k=1}^{r} c_k x_k, \sum_{j=1}^{r} c_j x_j \right) = \sum_{k,j} c_k \overline{c_j} \left( x_k, x_j \right) = \sum_{k=1}^{r} |c_k|^2. \blacksquare$$
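For instance, in $\mathbb{R}^2$ the standard basis vectors $e_1 = (1,0)^T$ and $e_2 = (0,1)^T$ form an orthonormal set, and for any scalars $c_1, c_2$,
$$\left| c_1 e_1 + c_2 e_2 \right|^2 = \left| \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} \right|^2 = |c_1|^2 + |c_2|^2,$$
in agreement with the lemma.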

13.3.1 The Least Squares Regression Line

For the situation of the least squares regression line discussed here I will specialize to the case of $\mathbb{R}^n$ rather than $\mathbb{F}^n$ because it seems this case is by far the most interesting and the extra details are not justified by an increase in utility. Thus, everywhere you see $A^*$ it suffices to place $A^T$.

An important application of Corollary 13.3.6 is the problem of finding the least squares regression line in statistics. Suppose you are given points in the $xy$ plane

$$\{(x_i, y_i)\}_{i=1}^{n}$$

and you would like to find constants $m$ and $b$ such that the line $y = mx + b$ goes through all these points. Of course this will be impossible in general. Therefore, try to find $m, b$ to get as close as possible. The desired system is

$$\begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix} = \begin{pmatrix} x_1 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{pmatrix} \begin{pmatrix} m \\ b \end{pmatrix} \equiv A \begin{pmatrix} m \\ b \end{pmatrix}$$

which is of the form $y = Ax$ and it is desired to choose $m$ and $b$ to make
$$\left| A \begin{pmatrix} m \\ b \end{pmatrix} - \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix} \right|^2$$

as small as possible. According to Theorem 13.3.3 and Corollary 13.3.6, the best values for $m$ and $b$ occur as the solution to

$$A^T A \begin{pmatrix} m \\ b \end{pmatrix} = A^T \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}, \quad A = \begin{pmatrix} x_1 & 1 \\ \vdots & \vdots \\ x_n & 1 \end{pmatrix}.$$

Thus, computing $A^T A$,
$$\begin{pmatrix} \sum_{i=1}^{n} x_i^2 & \sum_{i=1}^{n} x_i \\ \sum_{i=1}^{n} x_i & n \end{pmatrix} \begin{pmatrix} m \\ b \end{pmatrix} = \begin{pmatrix} \sum_{i=1}^{n} x_i y_i \\ \sum_{i=1}^{n} y_i \end{pmatrix}$$
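The normal equations above translate directly into a short computation. The following is a minimal numerical sketch, assuming NumPy and some made-up sample points (none of the data or variable names come from the text): it forms $A$, computes $A^T A$ and $A^T y$, and solves the resulting $2 \times 2$ system for $m$ and $b$.

```python
import numpy as np

# Hypothetical data points (x_i, y_i); illustrative only.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])

# Build the n x 2 matrix A whose rows are (x_i, 1).
A = np.column_stack((x, np.ones_like(x)))

# Normal equations: A^T A (m, b)^T = A^T y.
ATA = A.T @ A   # entries: sum x_i^2, sum x_i; sum x_i, n
ATy = A.T @ y   # entries: sum x_i y_i, sum y_i

m, b = np.linalg.solve(ATA, ATy)
print(f"m = {m:.4f}, b = {b:.4f}")
```

The solution $(m, b)$ of this $2 \times 2$ system gives the slope and intercept of the least squares regression line for the chosen points.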
