Appendix C Solutions to Selected Exercises
1 Vector spaces
1.1 Definition and examples
Exercises
1.1.4.
1.1.5.
1.1.6.
1.2 Properties
Exercise 1.2.2.
1.2.2.a
1.2.2.b
1.2.2.c
1.2.2.d
1.3 Subspaces
Exercises
1.3.2.
1.3.3.
1.3.4.
1.4 Span
Exercises
1.4.2.
1.4.3.
1.4.4.
1.4.7.
1.6 Linear Independence
Exercises
1.6.1.
Solution.
We set up a matrix and reduce:
Notice that this time we don’t get a unique solution, so we can conclude that these vectors are not independent. Furthermore, you can probably deduce the dependence relation from the reduced matrix above. Now, given a further vector, in how many ways can we write it as a linear combination of these vectors?
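The row-reduction test for independence can be sketched in general-purpose code. The vectors below are hypothetical stand-ins (not the exercise’s vectors), chosen so that the set is dependent, as in this exercise:

```python
from sympy import Matrix

# Hypothetical vectors (v3 = v1 + v2, so the set is dependent)
v1, v2, v3 = Matrix([1, 1, 0]), Matrix([0, 1, 1]), Matrix([1, 2, 1])

# Place the vectors in the columns of A; the set is independent
# exactly when A*c = 0 has only the trivial solution, i.e. when
# every column of the reduced matrix contains a pivot.
A = Matrix.hstack(v1, v2, v3)
rref, pivots = A.rref()
print(pivots)  # (0, 1): only two pivots for three columns, so dependent
```

A pivot-free column corresponds to a free variable in the homogeneous system, which is exactly the non-unique solution observed above.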
1.6.2.
Solution.
In each case, we set up the defining equation for independence, collect terms, and then analyze the resulting system of equations. (If you work with polynomials often enough, you can probably jump straight to the matrix. For now, let’s work out the details.)
And in this case, we don’t even need to ask the computer. The first equation determines one scalar right away; putting that into the third equation determines a second, and the second equation then gives the last.
Since the trivial solution is the only one, the set is independent.
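For polynomial sets, the same matrix test applies once each polynomial is replaced by its coefficient vector. A sketch with a hypothetical set (not necessarily the exercise’s polynomials):

```python
from sympy import Matrix, Poly, symbols

x = symbols('x')
# Hypothetical polynomials of degree at most 2
polys = [1 + x, x + x**2, 1 + x**2]

# Column j holds the coefficients of polys[j] with respect to {1, x, x^2}
A = Matrix([[Poly(p, x).coeff_monomial(x**k) for p in polys]
            for k in range(3)])
print(A.rank())  # 3 pivots for 3 polynomials: the set is independent
```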
1.6.3.
1.6.8.
1.6.9.
1.6.10.
1.6.11.
1.7 Basis and dimension
Exercises
1.7.1.
1.7.1.a
1.7.1.b
1.7.1.c
Solution.
If a polynomial equals its reflection, then it is an even polynomial, and therefore its odd-degree coefficients all vanish. (If you didn’t know this, it’s easily verified: setting the polynomial equal to its reflection,
we can immediately cancel the even-degree terms from each side, leaving only the odd-degree terms, which implies that their coefficients are zero.)
It follows that the set of even-degree monomials spans the subspace, and since this is a subset of the standard basis of the polynomial space, it must be independent, and is therefore a basis, letting us conclude that the dimension equals the number of even-degree monomials.
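The cancellation step can be checked symbolically; working in degree 4 is an assumption made here for illustration only:

```python
from sympy import expand, symbols

x, a0, a1, a2, a3, a4 = symbols('x a0 a1 a2 a3 a4')
# Generic polynomial of degree at most 4 with symbolic coefficients
p = a0 + a1*x + a2*x**2 + a3*x**3 + a4*x**4

# If p(x) = p(-x), the difference must vanish; only odd-degree terms
# survive the subtraction, so their coefficients must be zero.
diff = expand(p - p.subs(x, -x))
print(diff)  # 2*a1*x + 2*a3*x**3
```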
1.7.2.
1.7.4.
1.7.5.
1.7.9.
Solution.
(a) The dimension of the subspace cannot exceed the dimension of the ambient space, since the former is contained in the latter; the subspace is non-zero, and thus its dimension cannot be 0. Hence 1, 2, or 3 are its possible dimensions.
(b) If the containment is proper, then the dimension of the subspace is strictly less than the dimension of the ambient space. Thus only 1 or 2 are possible dimensions.
1.7.10.
1.7.11.
1.8 New subspaces from old
Exercise 1.8.7.
1.8.7.a
1.8.7.b
Exercises
1.8.1.
1.8.1.a
1.8.1.b
1.8.2.
1.8.2.a
1.8.2.b
Solution.
Since the subspace has codimension 2 (it is cut out by two independent conditions), any complement of it must have dimension 2. We therefore need to find two independent vectors that do not belong to the subspace.
Recall that the subspace is defined by two conditions. Therefore, a vector is not in the subspace as soon as either condition fails. This suggests that we define two vectors, each of which violates one of these conditions.
For the first, let us take a vector failing the first condition; for the second, a vector failing the second. We know that the resulting pair is linearly independent, because each vector has nonzero entries in different positions. Therefore, their span is a complement of the subspace, since it is spanned by vectors not in the subspace and it has the correct dimension.
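The dimension count can be verified computationally. The subspace and vectors below are hypothetical stand-ins mirroring the structure of the argument (a subspace of R^4 cut out by two coordinate conditions), not the exercise’s actual data:

```python
from sympy import Matrix

# Hypothetical stand-in: U = {x in R^4 : x1 = 0 and x2 = 0}, basis {e3, e4}
u_basis = [Matrix([0, 0, 1, 0]), Matrix([0, 0, 0, 1])]
# Each candidate complement vector violates one defining condition of U
w_basis = [Matrix([1, 0, 0, 0]), Matrix([0, 1, 0, 0])]

# U + W is all of R^4 exactly when the four vectors together have rank 4;
# since dim U + dim W = 4, the sum is then direct.
A = Matrix.hstack(*(u_basis + w_basis))
print(A.rank())  # 4
```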
1.8.4.
2 Linear Transformations
2.1 Definition and examples
Exercises
2.1.1.
Solution.
2.1.2.
2.1.3.
2.1.4.
2.1.5.
2.1.6.
2.1.7.
2.1.8.
2.1.9.
2.1.10.
2.1.11.
2.2 Kernel and Image
Exercises
2.2.1.
2.2.1.a
2.2.1.b
2.2.1.c
2.2.2.
2.2.3.
2.2.4.
2.2.5.
2.2.6.
2.2.7.
2.2.8.
2.2.9.
2.2.10.
2.2.11.
Solution.
(a) A basis for the range consists of two vectors, and hence the dimension of the range is 2.
(b) The rank-nullity theorem implies that the dimension of the kernel is 2. Solving the homogeneous system produces two spanning vectors, and it is easy to check that these two vectors are linearly independent. Therefore, the dimension of the kernel is indeed 2.
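The rank-nullity bookkeeping can be confirmed in code. The matrix below is a hypothetical 3×4 example with rank 2, not the transformation from the exercise:

```python
from sympy import Matrix

# Hypothetical matrix: the third row is the sum of the first two, so rank 2
A = Matrix([[1, 0, 1, 0],
            [0, 1, 0, 1],
            [1, 1, 1, 1]])

rank = A.rank()
nullity = len(A.nullspace())  # dimension of the kernel
print(rank, nullity)  # rank + nullity equals the number of columns
```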
2.2.12.
2.2.13.
2.2.13.a
Solution.
Conversely, suppose the dimension of the domain is at most the dimension of the codomain. Choose a basis of each space. By Theorem 2.1.8, there exists a linear transformation sending each basis vector of the domain to a distinct basis vector of the codomain. (The main point here is that we run out of basis vectors for the domain before we run out of basis vectors for the codomain.) This map is injective: if a vector lies in the kernel, write it in terms of the domain basis. Then applying the map expresses zero as a linear combination of codomain basis vectors.
Since these form a subset of a basis, they’re independent. Therefore, the scalars must all be zero, and therefore the vector itself is zero.
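With generic names (these symbols are assumptions for illustration, not necessarily the book’s notation: a basis v_1, …, v_n of the domain sent to distinct members w_1, …, w_n of a basis of the codomain), the injectivity computation would read:

```latex
T(v) = T\left(\sum_{i=1}^{n} c_i v_i\right)
     = \sum_{i=1}^{n} c_i T(v_i)
     = \sum_{i=1}^{n} c_i w_i = 0.
```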
2.2.13.b
Solution.
Suppose is surjective. Then so
Conversely, suppose Again, choose a basis of and a basis of where this time, We can define a linear transformation as follows:
It’s easy to check that this map is a surjection: given we can write it in terms of our basis as Using these same scalars, we can define such that
Note that it’s not important how we define the map on the leftover basis vectors of the domain. The point is that this time, we run out of basis vectors for the codomain before we run out of basis vectors for the domain. Once each vector in the basis of the codomain is in the image, we’re guaranteed that the map is surjective, and we can define its value on any remaining basis vectors however we want.
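In the same hypothetical notation (v_1, …, v_n a basis of the domain, w_1, …, w_m a basis of the codomain with m ≤ n, and T(v_i) = w_i for i ≤ m), the surjectivity computation would read:

```latex
w = \sum_{i=1}^{m} c_i w_i
\quad\Longrightarrow\quad
T\left(\sum_{i=1}^{m} c_i v_i\right)
  = \sum_{i=1}^{m} c_i T(v_i)
  = \sum_{i=1}^{m} c_i w_i = w.
```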
2.3 Isomorphisms, composition, and inverses
2.3.2 Composition and inverses
Exercises
2.3.2.2.
2.3.2.4.
2.3.2.5.
3 Orthogonality and Applications
3.1 Orthogonal sets of vectors
3.1.3 Exercises
3.1.3.1.
3.1.3.2.
Solution.
If then the result follows immediately from the dot product formula in Definition 3.1.1. Conversely, suppose for each Since the span there must exist scalars such that But then
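The dot-product expansion formula for an orthogonal set (which is presumably what the reference to Definition 3.1.1 provides) can be sketched with hypothetical vectors, not the exercise’s data:

```python
from sympy import Matrix

# Hypothetical orthogonal basis of R^2 and a target vector
v1, v2 = Matrix([1, 1]), Matrix([1, -1])
x = Matrix([3, 1])

# For an orthogonal set, the coefficients are c_i = (x . v_i) / (v_i . v_i)
c1 = x.dot(v1) / v1.dot(v1)
c2 = x.dot(v2) / v2.dot(v2)
print(c1, c2)
print(c1*v1 + c2*v2 == x)  # True: the expansion recovers x
```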
3.1.3.3.
Solution.
To find a fourth vector, we proceed as follows. Let We want to be orthogonal to the three vectors in our set. Computing dot products, we must have:
This is simply a homogeneous system of three equations in four variables. Using the Sage cell below, we find that our vector must satisfy
One possible nonzero solution is to take giving We’ll leave the verification that this vector works as an exercise.
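The null-space computation described above can be carried out in code. The three vectors here are hypothetical (the exercise’s vectors are not reproduced):

```python
from sympy import Matrix

# Rows are three hypothetical vectors in R^4
rows = Matrix([[1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 1, -1, -1]])

# A fourth vector orthogonal to all three is a nonzero solution of the
# homogeneous system rows * w = 0, i.e. an element of the null space.
w = rows.nullspace()[0]
print(w.T)        # a nonzero solution, e.g. (1, -1, -1, 1)
print(rows * w)   # the zero vector: w is orthogonal to every row
```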
3.1.3.4.
3.1.3.5.
3.1.3.6.
3.1.3.7.
3.1.3.8.
3.1.3.9.
3.2 The Gram-Schmidt Procedure
Exercises
3.2.1.
3.2.2.
3.2.3.
3.2.4.
3.3 Orthogonal Projection
Exercises
3.3.5.
3.3.6.
3.3.7.
3.3.8.
3.3.9.
3.3.10.
3.5 Project: dual basis
Exercise 3.5.1.
Solution.
We know that the dual space has the same dimension as the original space. Since there are exactly that many vectors in the dual basis, it’s enough to show that they’re linearly independent. Suppose that
for some scalars
By the definition of the dual basis, for each we have
Thus, for each and therefore, the are linearly independent.
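With a basis e_1, …, e_n and dual basis f_1, …, f_n satisfying f_i(e_j) = δ_ij (standard notation, assumed here), the key computation is:

```latex
0 = \left(\sum_{i=1}^{n} c_i f_i\right)(e_j)
  = \sum_{i=1}^{n} c_i f_i(e_j)
  = \sum_{i=1}^{n} c_i \delta_{ij}
  = c_j .
```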
Exercise 3.5.2.
Solution.
There are two things to check. First, we show that for each Since and it follows that is a map from to But we must also show that it’s linear.
Next, we need to show that is a linear map. Let and let be a scalar. We have:
and
This follows from the vector space structure on any space of functions. For a vector we have
Exercise 3.5.3.
Exercise 3.5.4.
Exercise 3.5.5.
4 Diagonalization
4.1 Eigenvalues and Eigenvectors
Exercises
4.1.1.
4.1.2.
4.1.3.
4.1.4.
4.1.5.
4.1.6.
4.1.7.
4.1.8.
4.1.9.
4.2 Diagonalization of symmetric matrices
Exercise 4.2.1.
Exercises
4.2.1.
4.2.2.
4.2.3.
4.4 Quadratic forms
Exercises
4.4.1.
4.4.2.
4.4.3.
4.5 Diagonalization of complex matrices
4.5.2 Complex matrices
Exercise 4.5.8.
4.5.3 Exercises
4.5.3.1.
4.5.3.2.
4.5.3.3.
4.5.3.4.
4.5.3.5.
4.5.3.6.
4.5.3.7.
4.6 Project: linear dynamical systems
Exercise 4.6.1.
4.6.1.a
4.6.1.b
4.6.1.c
Exercise 4.6.2.
4.6.2.a
4.6.2.b
4.6.2.c
Exercise 4.6.3.
4.6.3.a
4.6.3.b
4.6.3.c
Exercise 4.6.5. Markov Chains.
4.6.5.a
4.6.5.b
Solution.
4.6.5.c
Exercise 4.6.6. Properties of stochastic matrices.
4.6.6.a
4.6.6.b
4.6.6.c
4.6.6.d
Solution.
Suppose we have an eigenvector corresponding to the eigenvalue 1 that is not a multiple of the all-ones vector. Then its entries cannot all be equal. Consider the smallest entry of this eigenvector, and note that at least one other entry must be strictly larger.
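The facts in play can be checked on a small hypothetical example, assuming the row-stochastic convention (each row sums to 1, so the all-ones vector is an eigenvector for the eigenvalue 1); this matrix is not the one from the project:

```python
from sympy import Matrix, Rational

# Hypothetical row-stochastic matrix: each row sums to 1
P = Matrix([[Rational(7, 10), Rational(3, 10)],
            [Rational(2, 5),  Rational(3, 5)]])

ones = Matrix([1, 1])
print(P * ones == ones)  # True: 1 is an eigenvalue with eigenvector (1, 1)
print(P.eigenvals())     # eigenvalues with multiplicities; 1 is among them
```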
4.6.6.f
4.7 Matrix Factorizations and Eigenvalues
4.7.3 Exercises
4.7.3.1.
4.7.3.2.
4.7.3.3.
4.7.3.4.
5 Change of Basis
5.1 The matrix of a linear transformation
Exercise 5.1.2.
Solution.
It’s clear that the kernel of this map is trivial, since the only way to write the zero vector in terms of the given basis (or, indeed, any independent set) is to set all the scalars equal to zero.
This shows that is linear. To see that is an isomorphism, we can simply note that takes the basis to the standard basis of Alternatively, we can give the inverse: is given by
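The coordinate map can be computed by solving a linear system; the basis below is hypothetical, not the one from the exercise:

```python
from sympy import Matrix

# Columns are a hypothetical basis of R^2
B = Matrix([[1,  1],
            [1, -1]])
v = Matrix([3, 1])

# The coordinate vector c of v with respect to this basis satisfies B*c = v
c = B.solve(v)
print(c.T)          # the coordinates of v
print(B * c == v)   # True: the coordinates reconstruct v
```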
Exercises
5.1.1.
5.1.2.
5.1.3.
5.1.4.
5.1.5.
5.1.6.
5.2 The matrix of a linear operator
Exercise 5.2.4.
Exercises
5.2.1.
5.2.2.
5.7 Jordan Canonical Form
Exercise 5.7.7.
Solution.
With respect to the standard basis of the matrix of is
We find (perhaps using the Sage cell provided below, and the code from the example above) that
so has eigenvalues (of multiplicity ), and (of multiplicity ).
We tackle the repeated eigenvalue first. The reduced row-echelon form of is given by
so
We now attempt to solve We find
so where We take as our first generalized eigenvector. Note that so as expected.
Finally, we look for an element of of the form where We set up and solve the system as follows:
so where
Our basis of column vectors is therefore Note that by design,
The corresponding Jordan basis for is
and with respect to this basis, we have
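The whole computation can be cross-checked with a computer algebra system. The matrix below is a hypothetical one with a repeated, defective eigenvalue, not the operator from the exercise:

```python
from sympy import Matrix

# Hypothetical matrix: eigenvalue 2 with algebraic multiplicity 3 but only
# two independent eigenvectors, forcing a 2x2 Jordan block
A = Matrix([[ 3, 1, 0],
            [-1, 1, 0],
            [ 0, 0, 2]])

# jordan_form returns P and J with A = P * J * P**(-1)
P, J = A.jordan_form()
print(J)  # Jordan blocks along the diagonal, one superdiagonal 1
print(P * J * P.inv() == A)  # True
```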