- Find the best solution to the given system.
- Find an orthonormal basis for ℝ^{3}, given that w_{1} is a multiple of the given vector.
- Suppose A = A^{T} is a symmetric real n × n matrix which has all positive eigenvalues. Define ⟨x,y⟩ ≡ x^{T}Ay. Show this is an inner product on ℝ^{n}. What does the Cauchy Schwarz inequality say in this case?
- Let ‖x‖_{∞} ≡ max{|x_{j}| : j = 1,…,n}. Show this is a norm on ℂ^{n}. Here x = (x_{1},…,x_{n})^{T}. Show ‖x‖_{∞} ≤ ‖x‖ ≡ ⟨x,x⟩^{1∕2} where the above is the usual inner product on ℂ^{n}.
- Let ‖x‖_{1} ≡ ∑_{j=1}^{n}|x_{j}|. Show this is a norm on ℂ^{n}. Here x = (x_{1},…,x_{n})^{T}. Show ‖x‖_{1} ≥ ‖x‖ ≡ ⟨x,x⟩^{1∕2} where the above is the usual inner product on ℂ^{n}. Show there cannot exist an inner product such that this norm comes from the inner product as described above for inner product spaces.
- Show that if ‖⋅‖ is any norm on any vector space, then |‖x‖ − ‖y‖| ≤ ‖x − y‖.
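One quick way to see that ‖⋅‖_{1} cannot come from an inner product is to test the parallelogram identity ‖x + y‖^{2} + ‖x − y‖^{2} = 2‖x‖^{2} + 2‖y‖^{2}, which every norm induced by an inner product must satisfy. A minimal numerical sketch in plain Python (the vectors are illustrative):

```python
def norm1(x):
    # The 1-norm: sum of absolute values of the entries.
    return sum(abs(t) for t in x)

def parallelogram_gap(x, y, norm):
    # Returns ||x+y||^2 + ||x-y||^2 - (2||x||^2 + 2||y||^2),
    # which is 0 for every norm that comes from an inner product.
    s = tuple(a + b for a, b in zip(x, y))
    d = tuple(a - b for a, b in zip(x, y))
    return norm(s)**2 + norm(d)**2 - 2*norm(x)**2 - 2*norm(y)**2

x, y = (1.0, 0.0), (0.0, 1.0)
gap = parallelogram_gap(x, y, norm1)
print(gap)  # 4.0, nonzero: the 1-norm violates the parallelogram identity
```

A single violating pair is enough: since the identity fails for these two vectors, no inner product can generate ‖⋅‖_{1}.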
- Relax the assumptions in the axioms for the inner product. Change the axiom which says ⟨x,x⟩ ≥ 0 and equals 0 if and only if x = 0 to simply read ⟨x,x⟩ ≥ 0. Show the Cauchy Schwarz inequality still holds in the following form: |⟨x,y⟩| ≤ ⟨x,x⟩^{1∕2}⟨y,y⟩^{1∕2}.
- Let H be an inner product space and let {u_{k}}_{k=1}^{n} be an orthonormal basis for H. Show ⟨x,y⟩ = ∑_{k=1}^{n}⟨x,u_{k}⟩⟨u_{k},y⟩.
- Let the vector space V consist of real polynomials of degree no larger than 3. Thus a typical vector is a polynomial of the form a + bx + cx^{2} + dx^{3}. For p,q ∈ V define the inner product ⟨p,q⟩ ≡ ∫_{0}^{1}pq dx. Show this is indeed an inner product. Then state the Cauchy Schwarz inequality in terms of this inner product. Show {1, x, x^{2}, x^{3}} is a basis for V. Finally, find an orthonormal basis for V. This is an example of some orthonormal polynomials.
- Let P_{n} denote the polynomials of degree no larger than n − 1 which are defined on an interval [a,b]. Let {x_{1},…,x_{n}} be n distinct points in [a,b]. Now define for p,q ∈ P_{n}, ⟨p,q⟩ ≡ ∑_{k=1}^{n}p(x_{k})q(x_{k}). Show this yields an inner product on P_{n}. Hint: Most of the axioms are obvious. The one which says ⟨p,p⟩ = 0 if and only if p = 0 is the only interesting one. To verify this one, note that a nonzero polynomial of degree no more than n − 1 has at most n − 1 zeros.
- Let C([0,1]) denote the vector space of continuous real valued functions defined on [0,1]. Let the inner product be given as ⟨f,g⟩ ≡ ∫_{0}^{1}fg dx. Show this is an inner product. Also let V be the subspace described in Problem 9. Using the result of this problem, find the vector in V which is closest to x^{4}.
- A regular Sturm Liouville problem involves the differential equation, for an unknown function of x which is denoted here by y, (p(x)y′)′ + (λq(x) + r(x))y = 0 for x ∈ [a,b], and it is assumed that p(t), q(t) > 0 for any t ∈ [a,b] and also there are boundary conditions C_{1}y(a) + C_{2}y′(a) = 0, C_{3}y(b) + C_{4}y′(b) = 0. There is an immense theory connected to these important problems. The constant λ is called an eigenvalue. Show that if y is a solution to the above problem corresponding to λ = λ_{1} and if z is a solution corresponding to λ = λ_{2} ≠ λ_{1}, then ∫_{a}^{b}q(x)y(x)z(x)dx = 0, (13.5) and this defines an inner product. Hint: Do something like this: multiply the equation for y by z and the equation for z by y, subtract, and then integrate. Use the boundary conditions to show that y′(a)z(a) − z′(a)y(a) = 0 and y′(b)z(b) − z′(b)y(b) = 0. The formula 13.5 is called an orthogonality relation. It turns out there are typically infinitely many eigenvalues and it is interesting to write given functions as an infinite series of these “eigenfunctions”.
- Consider the continuous functions defined on [0,π], C([0,π]). Show ⟨f,g⟩ ≡ ∫_{0}^{π}fg dx is an inner product on this vector space. Show the functions {√(2∕π) sin(nx)}_{n=1}^{∞} are an orthonormal set. What does this mean about the dimension of the vector space C([0,π])? Now let V_{N} = span(√(2∕π) sin(x),…,√(2∕π) sin(Nx)). For f ∈ C([0,π]) find a formula for the vector in V_{N} which is closest to f with respect to the norm determined from the above inner product. This is called the N^{th} partial sum of the Fourier series of f. An important problem is to determine whether and in what way this Fourier series converges to the function f. The norm which comes from this inner product is sometimes called the mean square norm.
- Consider the subspace V ≡ ker(A), where A is the given matrix.
Find an orthonormal basis for V. Hint: You might first find a basis and then use the Gram Schmidt procedure.
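The Gram Schmidt procedure suggested in the hint can be sketched directly with NumPy. The vectors below are illustrative stand-ins, not a basis for the exercise's kernel:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for u in basis:
            w -= np.dot(u, w) * u   # subtract the projection onto u
        basis.append(w / np.linalg.norm(w))
    return basis

# Illustrative vectors (not the exercise's data): a basis for a subspace of R^4.
v1 = np.array([1.0, 1.0, 0.0, 0.0])
v2 = np.array([1.0, 0.0, 1.0, 0.0])
u1, u2 = gram_schmidt([v1, v2])
print(np.dot(u1, u2))   # ~0: the output vectors are orthogonal
print(np.linalg.norm(u1), np.linalg.norm(u2))  # both ~1: unit length
```

To apply this to the exercise, first compute a basis of ker(A) (for instance from the row reduced form of A) and feed those vectors to `gram_schmidt`.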

- The Gram Schmidt process starts with a basis for a subspace {v_{1},…,v_{n}} and produces an orthonormal basis for the same subspace {u_{1},…,u_{n}} such that span(u_{1},…,u_{k}) = span(v_{1},…,v_{k}) for each k. Show that in the case of ℝ^{m} the QR factorization does the same thing. More specifically, if A = (v_{1} ⋯ v_{n}) and if A = QR ≡ (q_{1} ⋯ q_{n})R, then the vectors {q_{1},…,q_{n}} form an orthonormal set and for each k, span(q_{1},…,q_{k}) = span(v_{1},…,v_{k}).
- Verify the parallelogram identity for any inner product space: ‖x + y‖^{2} + ‖x − y‖^{2} = 2‖x‖^{2} + 2‖y‖^{2}. Why is it called the parallelogram identity?
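The claim in the QR exercise above can be checked numerically: the columns of Q are orthonormal, and for each k the first k columns of Q span the same subspace as the first k columns of A. A sketch with NumPy on an arbitrary example matrix (not one from the text):

```python
import numpy as np

# An arbitrary full-column-rank matrix (illustrative only).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 2.0],
              [1.0, 1.0, 1.0]])
Q, R = np.linalg.qr(A)  # reduced QR: Q is 4x3 with orthonormal columns

# Orthonormality of the columns of Q.
print(np.allclose(Q.T @ Q, np.eye(3)))  # True

# span(q_1,...,q_k) = span(v_1,...,v_k): stacking the first k columns of A
# next to the first k columns of Q adds no new directions, so the rank is k.
for k in range(1, 4):
    stacked = np.hstack([A[:, :k], Q[:, :k]])
    print(np.linalg.matrix_rank(stacked) == k)  # True for each k
```

The rank test is a convenient numerical stand-in for the span equality: if either set of k columns contained a direction outside the other's span, the stacked matrix would have rank larger than k.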

- Let H be an inner product space and let K ⊆ H be a nonempty convex subset. This means that if k_{1},k_{2} ∈ K, then the line segment consisting of points of the form tk_{1} + (1 − t)k_{2} for t ∈ [0,1] is also contained in K. Suppose for each x ∈ H, there exists Px defined to be a point of K closest to x. Show that Px is unique so that P actually is a map. Hint: Suppose z_{1} and z_{2} both work as closest points. Consider the midpoint, (z_{1} + z_{2})∕2, and use the parallelogram identity of Problem 16 in an auspicious manner.
- In the situation of Problem 17 suppose K is a closed convex subset and that H is complete. This means every Cauchy sequence converges. Recall a sequence {x_{n}} is a Cauchy sequence if for every ε > 0 there exists N_{ε} such that whenever m,n > N_{ε}, it follows ‖x_{m} − x_{n}‖ < ε. Let {k_{n}} be a sequence of points of K such that lim_{n→∞}‖x − k_{n}‖ = inf{‖x − k‖ : k ∈ K}. This is called a minimizing sequence. Show there exists a unique k ∈ K such that lim_{n→∞}‖k_{n} − k‖ = 0 and that k = Px. That is, there exists a well defined projection map onto the convex subset of H. Hint: Use the parallelogram identity in an auspicious manner to show {k_{n}} is a Cauchy sequence which must therefore converge. Since K is closed, it follows this will converge to something in K which is the desired vector.
- Let H be an inner product space which is also complete and let P denote the projection map onto a convex closed subset, K. Show this projection map is characterized by the inequality Re⟨x − Px, k − Px⟩ ≤ 0 for all k ∈ K. That is, a point z ∈ K equals Px if and only if Re⟨x − z, k − z⟩ ≤ 0 for all k ∈ K. This is called a variational inequality because k is allowed to vary and the inequality continues to hold for all k ∈ K.
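For a concrete closed convex set this characterization can be tested numerically. Take K to be the closed unit ball in ℝ², where the projection is simply x ↦ x∕max(1,‖x‖). The sketch below (an illustration of the inequality Re⟨x − Px, k − Px⟩ ≤ 0, not a proof) checks it against many sampled points k ∈ K:

```python
import numpy as np

def project_to_ball(x):
    # Projection onto the closed unit ball: scale down if x lies outside.
    n = np.linalg.norm(x)
    return x / n if n > 1 else x

rng = np.random.default_rng(0)
x = np.array([3.0, 4.0])      # a point outside the ball
Px = project_to_ball(x)       # lands on the boundary at (0.6, 0.8)

# Sample points k in K and record the worst case of <x - Px, k - Px>,
# which the variational inequality says should never be positive.
worst = -np.inf
for _ in range(1000):
    k = rng.uniform(-1, 1, size=2)
    if np.linalg.norm(k) <= 1:     # keep only samples inside the ball
        worst = max(worst, np.dot(x - Px, k - Px))
print(worst <= 1e-12)  # True: the inequality held at every sample
```

Here x − Px is an outward normal to the ball at Px, so for every k in the ball the angle between x − Px and k − Px is at least a right angle, which is exactly what the inequality expresses.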

- Using Problem 19 and Problems 17 - 18 show the projection map, P, onto a closed convex subset is Lipschitz continuous with Lipschitz constant 1. That is, ‖Px − Py‖ ≤ ‖x − y‖.
- Give an example of two vectors x,y in ℝ^{4} or ℝ^{3} and a subspace V such that x ⋅ y = 0 but Px ⋅ Py ≠ 0, where P denotes the projection map which sends x to its closest point on V.
- Suppose you are given the data points. Find the linear regression line using the formulas derived above. Then graph the given data along with your regression line.
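Since the exercise's data points did not survive extraction, here is the least squares recipe on made-up data. The regression line y = mx + b minimizes ∑(y_{i} − mx_{i} − b)^{2}, which is the least squares problem for a design matrix whose rows are (x_{i}, 1):

```python
import numpy as np

# Illustrative data points (not the ones from the exercise),
# chosen to lie exactly on the line y = 2x + 1.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.0, 3.0, 5.0, 7.0])

# Design matrix: a column of x-values and a column of ones.
A = np.column_stack([xs, np.ones_like(xs)])

# Solve the least squares problem A [m, b]^T ~ ys.
(m, b), *_ = np.linalg.lstsq(A, ys, rcond=None)
print(m, b)  # recovers slope 2.0 and intercept 1.0
```

With the actual data substituted for `xs` and `ys`, the same two lines produce the regression coefficients, and the graph is just the data points plotted against the line m·x + b.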
- Generalize the least squares procedure to the situation in which data is given and you desire to fit it with an expression of the form y = af(x) + bg(x) + c where the problem would be to find a, b and c in order to minimize the error. Could this be generalized to higher dimensions? How about more functions?
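The generalization works the same way as the straight-line case: each function in the model contributes one column to the design matrix. A sketch with the hypothetical choices f = sin and g = exp (any functions would do):

```python
import numpy as np

# Hypothetical model y = a*sin(x) + b*exp(x) + c, fit by least squares.
xs = np.linspace(0.0, 2.0, 20)
ys = 3.0 * np.sin(xs) - 1.0 * np.exp(xs) + 0.5   # synthetic noiseless data

# One column per function in the model, plus a constant column for c.
A = np.column_stack([np.sin(xs), np.exp(xs), np.ones_like(xs)])
coeffs, *_ = np.linalg.lstsq(A, ys, rcond=None)
print(coeffs)  # recovers approximately [3.0, -1.0, 0.5]
```

More functions simply mean more columns, and higher-dimensional inputs mean the columns are evaluations of functions of several variables; the least squares machinery is unchanged.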
- Let A ∈ ℒ(X,Y) where X and Y are finite dimensional vector spaces with the dimension of X equal to n. Define rank(A) ≡ dim(AX) and nullity(A) ≡ dim(ker(A)). Show that nullity(A) + rank(A) = dim(X). Hint: Let {x_{i}}_{i=1}^{r} be a basis for ker(A) and let {x_{i}}_{i=1}^{r} ∪ {y_{i}}_{i=1}^{n−r} be a basis for X. Then show that {Ay_{i}}_{i=1}^{n−r} is linearly independent and spans AX.
- Let A be an m × n matrix. Show the column rank of A equals the column rank of A^{∗}A. Next verify the column rank of A^{∗}A is no larger than the column rank of A^{∗}. Next justify the following inequality to conclude the column rank of A equals the column rank of A^{∗}: rank(A) = rank(A^{∗}A) ≤ rank(A^{∗}) ≤ rank(AA^{∗}) ≤ rank(A). Hint: Start with an orthonormal basis {Ax_{j}}_{j=1}^{r} of A(ℂ^{n}) and verify {A^{∗}Ax_{j}}_{j=1}^{r} is a basis for A^{∗}A(ℂ^{n}).
- Let A be a real m × n matrix and let A = QR be the QR factorization with Q orthogonal and R upper triangular. Show that there exists a solution x to the equation R^{T}Rx = R^{T}Q^{T}b and that this solution is also a least squares solution defined above such that A^{T}Ax = A^{T}b.
- Here are three vectors in ℝ^{4}, given as column vectors. Find the three dimensional volume of the parallelepiped determined by these three vectors.
- Here are two vectors in ℝ^{4}, given as column vectors. Find the volume of the parallelepiped determined by these two vectors.
- Here are three vectors in ℝ^{2}, given as column vectors. Find the three dimensional volume of the parallelepiped determined by these three vectors. Recall that from the above theorem, this should equal 0.
- Find the equation of the plane through the three given points.
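The k-dimensional volume asked for in the exercises above is √(det(A^{T}A)), where the k vectors are the columns of A. A sketch with illustrative vectors (the exercises' own vectors were not preserved):

```python
import numpy as np

def parallelepiped_volume(vectors):
    # Volume of the parallelepiped spanned by the vectors: sqrt(det(A^T A))
    # with the vectors as columns of A. Clamp at 0 to guard against a tiny
    # negative determinant from floating point roundoff in the singular case.
    A = np.column_stack(vectors)
    return np.sqrt(max(np.linalg.det(A.T @ A), 0.0))

# Two orthogonal vectors in R^4 of lengths 1 and 2: the 2-volume (area) is 2.
vol2 = parallelepiped_volume([np.array([1.0, 0.0, 0.0, 0.0]),
                              np.array([0.0, 2.0, 0.0, 0.0])])

# Three vectors in R^2 are linearly dependent, so the 3-volume is 0.
vol3 = parallelepiped_volume([np.array([1.0, 0.0]),
                              np.array([0.0, 1.0]),
                              np.array([1.0, 1.0])])
print(vol2, vol3)  # 2.0 and (approximately) 0.0
```

The second call illustrates the remark in the last volume exercise: more vectors than dimensions force A^{T}A to be singular, so the determinant, and hence the volume, vanishes.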
- Let T map a vector space V to itself. Explain why T is one to one if and only if T is onto. It is in the text, but do it again in your own words.
- ↑Let all matrices be complex with complex field of scalars and let A be an n × n matrix and B an m × m matrix while X will be an n × m matrix. The problem is to consider solutions to Sylvester’s equation. Solve the following equation for X: AX − XB = C, where C is an arbitrary n × m matrix. Show there exists a unique solution if and only if σ(A) ∩ σ(B) = ∅. Hint: If q(λ) is a polynomial, show first that if AX − XB = 0, then q(A)X − Xq(B) = 0. Next define the linear map T which maps the n × m matrices to the n × m matrices as follows: TX ≡ AX − XB. Show that the only solution to TX = 0 is X = 0 so that T is one to one if and only if σ(A) ∩ σ(B) = ∅. Do this by using the first part for q(λ) the characteristic polynomial of B and then use the Cayley Hamilton theorem. Explain why q(A)^{−1} exists if and only if the condition σ(A) ∩ σ(B) = ∅ holds.
- Compare Definition 13.5.2 with the Binet Cauchy theorem, Theorem 8.4.5. What is the geometric meaning of the Binet Cauchy theorem in this context?
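The map T in the Sylvester exercise, TX ≡ AX − XB, can be written as an ordinary matrix acting on the column-stacked vector vec(X), which gives a direct (if inefficient) way to solve AX − XB = C and to see when T is invertible. A sketch with small illustrative matrices whose spectra are disjoint:

```python
import numpy as np

def solve_sylvester(A, B, C):
    """Solve AX - XB = C via vectorization:
    (I ⊗ A - B^T ⊗ I) vec(X) = vec(C), with column-stacking vec."""
    n, m = C.shape
    T = np.kron(np.eye(m), A) - np.kron(B.T, np.eye(n))
    x = np.linalg.solve(T, C.flatten(order="F"))   # "F" = column-stacking
    return x.reshape((n, m), order="F")

# Illustrative matrices with σ(A) = {1, 2} and σ(B) = {-1, -3}, so the
# spectra are disjoint and the exercise predicts a unique solution.
A = np.array([[1.0, 1.0], [0.0, 2.0]])
B = np.array([[-1.0, 0.0], [4.0, -3.0]])
C = np.array([[1.0, 0.0], [2.0, 1.0]])

X = solve_sylvester(A, B, C)
residual = np.linalg.norm(A @ X - X @ B - C)
print(residual)  # ~0: X solves Sylvester's equation
```

The eigenvalues of the big matrix T are exactly the differences λ_{A} − λ_{B} over the two spectra, so T is invertible precisely when σ(A) ∩ σ(B) = ∅, matching the exercise's conclusion.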
- Let U,H be finite dimensional inner product spaces. (More generally, complete inner product spaces.) Let A be a linear map from U to H. Thus AU is a subspace of H. For g ∈ AU, define A^{−1}g to be the unique element of {x : Ax = g} which is closest to 0. Then define ⟨g,h⟩_{AU} ≡ ⟨A^{−1}g,A^{−1}h⟩_{U}. Show that this is a well defined inner product and that if A is one to one, then ‖Au‖_{AU} = ‖u‖_{U} and ⟨Au,Av⟩_{AU} = ⟨u,v⟩_{U}.
- For f a piecewise continuous function, let S_{n}f denote the n^{th} partial sum of the Fourier series. Recall that this Fourier series was of the form ∑_{k=−n}^{n}c_{k}e^{ikx} where c_{k} = (1∕2π)∫_{−π}^{π}f(t)e^{−ikt}dt. Show this can be written in the form S_{n}f(x) = ∫_{−π}^{π}f(t)D_{n}(x − t)dt where D_{n}(t) = (1∕2π)∑_{k=−n}^{n}e^{ikt}. This is called the Dirichlet kernel. Show that D_{n}(t) = (1∕2π) sin((n + 1∕2)t)∕sin(t∕2). For V the vector space of piecewise continuous functions, define S_{n} : V → V by S_{n}f(x) = ∫_{−π}^{π}f(t)D_{n}(x − t)dt. Show that S_{n} is a linear transformation. (In fact, S_{n}f is not just piecewise continuous but infinitely differentiable. Why?) Explain why ∫_{−π}^{π}D_{n}(t)dt = 1. Hint: To obtain the closed formula, multiply D_{n}(t) by sin(t∕2), or sum the geometric series ∑_{k=−n}^{n}e^{ikt}.
- ↑Let V be an inner product space and let U be a finite dimensional subspace with an orthonormal
basis {u_{i}}_{i=1}^{n}. If y ∈ V, show ‖y‖^{2} ≥ ∑_{i=1}^{n}|⟨y,u_{i}⟩|^{2}. Now suppose that {u_{k}}_{k=1}^{∞} is an orthonormal set of vectors of V. Explain why lim_{k→∞}⟨y,u_{k}⟩ = 0. When applied to functions, this is a special case of the Riemann Lebesgue lemma.

- ↑Let f be any piecewise continuous real function which is bounded on [−π,π]. Show, using the above problem, that lim_{n→∞}∫_{−π}^{π}f(t) sin(nt)dt = lim_{n→∞}∫_{−π}^{π}f(t) cos(nt)dt = 0.
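The Riemann Lebesgue conclusion can be illustrated numerically: for a bounded piecewise continuous f, the integrals ∫_{−π}^{π}f(t) sin(nt)dt shrink as n grows. The sketch below uses an illustrative step function and a simple Riemann-sum quadrature:

```python
import numpy as np

def f(t):
    # A bounded, piecewise continuous function with a jump at t = 0.
    return np.where(t < 0, -1.0, 2.0)

def oscillatory_integral(n, samples=200000):
    # Riemann-sum approximation of the integral of f(t) sin(nt) over [-pi, pi].
    t = np.linspace(-np.pi, np.pi, samples, endpoint=False)
    dt = 2 * np.pi / samples
    return np.sum(f(t) * np.sin(n * t)) * dt

# The magnitudes decay as the frequency n increases (odd n avoids the
# exact cancellation this particular step function shows at even n).
vals = [abs(oscillatory_integral(n)) for n in (1, 11, 101)]
print(vals)  # strictly decreasing toward 0
```

For this f the exact value at odd n is 6∕n, so the decay visible in the output matches the limit the exercise asks you to prove.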