Section 5.4 Direct Sums and Invariant Subspaces
This section continues the discussion of direct sums (from Section 1.8) and invariant subspaces (from Section 4.1), to better understand the structure of linear operators.
Subsection 5.4.1 Invariant subspaces
For any operator $T : V \to V$, there are four subspaces that are always $T$-invariant: $\{0\}$, $V$, $\ker T$, and $\operatorname{im} T$.
Of course, some of these subspaces might be the same; for example, if $T$ is invertible, then $\ker T = \{0\}$ and $\operatorname{im} T = V$.
Exercise 5.4.2.
A subspace $U$ is $T$-invariant if $T$ does not map any vectors in $U$ outside of $U$. Notice that if we shrink the domain of $T$ to $U$, then we get an operator from $U$ to $U$, since the image $T(U)$ is contained in $U$.
Definition 5.4.3.
Let $T : V \to V$ be a linear operator, and let $U \subseteq V$ be a $T$-invariant subspace. The restriction of $T$ to $U$, denoted $T|_U$, is the operator $T|_U : U \to U$ defined by $T|_U(u) = T(u)$ for all $u \in U$.
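As a quick numerical illustration (a hypothetical example, not from the text), the sketch below builds an operator on $\mathbb{R}^3$ whose matrix is chosen so that $U = \operatorname{span}\{e_1, e_2\}$ is invariant, and reads off the matrix of the restriction $T|_U$:

```python
import numpy as np

# A hypothetical operator on R^3, chosen so that U = span{e1, e2}
# is T-invariant: T(e1) and T(e2) have zero third coordinate.
T = np.array([[2.0, 1.0, 5.0],
              [0.0, 3.0, 7.0],
              [0.0, 0.0, 4.0]])

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])

# T maps U into U, so shrinking the domain to U gives an operator U -> U.
assert (T @ e1)[2] == 0 and (T @ e2)[2] == 0

# The matrix of T|_U with respect to {e1, e2} is the top-left 2x2 block.
T_U = T[:2, :2]
```

The restriction is a genuinely smaller object: its matrix is $2\times 2$, while $T$ itself is $3\times 3$.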
Exercise 5.4.4.
False. The definition of a function includes its domain and codomain. Since the domain of $T|_U$ is different from that of $T$, they are not the same function.
A lot can be learned by studying the restrictions of an operator to invariant subspaces. Indeed, the textbook by Axler does almost everything from this point of view. One reason to study invariant subspaces is that they allow us to put the matrix of $T$ into simpler forms.
Theorem 5.4.5.
Let $T : V \to V$ be a linear operator, and let $U \subseteq V$ be a $T$-invariant subspace. Let $B_U = \{e_1, \ldots, e_k\}$ be a basis of $U$, and extend this to a basis
\[ B = \{e_1, \ldots, e_k, e_{k+1}, \ldots, e_n\} \]
of $V$. Then the matrix $M_B(T)$ with respect to this basis has the block-triangular form
\[ M_B(T) = \begin{bmatrix} M_{B_U}(T|_U) & P \\ 0 & Q \end{bmatrix} \]
for some matrices $P$ and $Q$.
Reducing a matrix to block-triangular form is useful because it simplifies computations such as determinants and characteristic polynomials, which are expensive for large matrices. In particular, if a matrix $A$ has the block form
\[ A = \begin{bmatrix} A_{11} & * & \cdots & * \\ 0 & A_{22} & \cdots & * \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & A_{nn} \end{bmatrix}, \]
where the diagonal blocks $A_{11}, \ldots, A_{nn}$ are square matrices, then $\det(A) = \det(A_{11}) \cdots \det(A_{nn})$ and $c_A(x) = c_{A_{11}}(x) \cdots c_{A_{nn}}(x)$.
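These two identities can be sanity-checked numerically. The sketch below (with blocks chosen arbitrarily for illustration) assembles a block upper-triangular matrix and verifies that its determinant and characteristic polynomial factor over the diagonal blocks:

```python
import numpy as np

# Two square diagonal blocks and an arbitrary off-diagonal block.
A11 = np.array([[2.0, 1.0],
                [0.0, 3.0]])
A22 = np.array([[1.0, 4.0],
                [2.0, 1.0]])
P = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# Block upper-triangular matrix with A11, A22 on the diagonal.
A = np.block([[A11, P],
              [np.zeros((2, 2)), A22]])

# det(A) = det(A11) * det(A22)
assert np.isclose(np.linalg.det(A), np.linalg.det(A11) * np.linalg.det(A22))

# The characteristic polynomial of A is the product of those of the blocks.
# np.poly(M) returns the coefficients of the characteristic polynomial of M.
cA = np.poly(A)
c_blocks = np.polymul(np.poly(A11), np.poly(A22))
assert np.allclose(cA, c_blocks)
```

Here $\det(A_{11}) = 6$ and $\det(A_{22}) = -7$, so the determinant of the full $4 \times 4$ matrix is $-42$ without ever expanding it directly.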
Subsection 5.4.2 Eigenspaces
An important source of invariant subspaces is eigenspaces. Recall that for any real number $\lambda$ and any operator $T : V \to V$, we define
\[ E_\lambda(T) = \ker(T - \lambda 1_V). \]
For most values of $\lambda$, we'll have $E_\lambda(T) = \{0\}$. The values of $\lambda$ for which $E_\lambda(T)$ is non-trivial are precisely the eigenvalues of $T$. Note that since similar matrices have the same characteristic polynomial, every matrix representation of $T$ will have the same eigenvalues. These matrices do not generally have the same eigenspaces, but we do have the following.
Theorem 5.4.6.
Let $T : V \to V$ be a linear operator. For any scalar $\lambda$, the eigenspace $E_\lambda(T)$ is $T$-invariant. Moreover, for any ordered basis $B$ of $V$, the coefficient isomorphism $C_B : V \to \mathbb{R}^n$ induces an isomorphism
\[ E_\lambda(T) \cong E_\lambda(M_B(T)). \]
In other words, the two eigenspaces are isomorphic, although the isomorphism depends on a choice of basis.
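To see the invariance claim at work numerically, the sketch below (matrix chosen arbitrarily for illustration) checks that a matrix representation maps each eigenvector back into its own eigenspace, i.e. $Av = \lambda v \in E_\lambda$:

```python
import numpy as np

# A hypothetical matrix representation with eigenvalues 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

evals, evecs = np.linalg.eig(A)

# Each eigenspace is A-invariant: A sends an eigenvector v to
# lam * v, which lies back in span{v} = E_lam.
for lam, v in zip(evals, evecs.T):
    assert np.allclose(A @ v, lam * v)
```

The eigenvalues here are $5$ and $2$ (trace $7$, determinant $10$), and each one-dimensional eigenspace is carried into itself by $A$.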
Subsection 5.4.3 Direct Sums
Recall that for any subspaces $U, W$ of a vector space $V$, the sets
\[ U + W = \{u + w : u \in U,\ w \in W\} \quad\text{and}\quad U \cap W \]
are subspaces of $V$. Saying that $v \in U + W$ means that $v$ can be written as a sum of a vector in $U$ and a vector in $W$. However, this sum may not be unique. If $v = u + w$ with $u \in U$, $w \in W$, and $x \in U \cap W$ is non-zero, then we can write $v = (u + x) + (w - x)$, giving two different representations of $v$ as an element of $U + W$.
We proved in Theorem 1.8.9 in Section 1.8 that for any $v \in U + W$, there exist unique vectors $u \in U$ and $w \in W$ such that $v = u + w$ if and only if $U \cap W = \{0\}$.
Typically we are interested in the case where the two subspaces sum to all of $V$. Recall from Definition 1.8.11 that if $V = U \oplus W$, we say that $W$ is a complement of $U$. We also say that $V = U \oplus W$ is a direct sum decomposition of $V$. Of course, the orthogonal complement $U^\perp$ of a subspace $U$ is a complement in this sense, if $V$ is equipped with an inner product. (Without an inner product we have no concept of "orthogonal".) But even if we don't have an inner product, finding a complement is not too difficult, as the next example shows.
Example 5.4.7. Finding a complement by extending a basis.
The easiest way to determine a direct sum decomposition (or equivalently, a complement) is through the use of a basis. Suppose $U$ is a subspace of $V$ with basis $\{e_1, \ldots, e_k\}$, and extend this to a basis
\[ \{e_1, \ldots, e_k, e_{k+1}, \ldots, e_n\} \]
of $V$. Let $W = \operatorname{span}\{e_{k+1}, \ldots, e_n\}$. Then $V = U + W$, since the first $k$ vectors in the basis belong to $U$, and the remaining $n - k$ vectors belong to $W$. And $U \cap W = \{0\}$, since if $v \in U \cap W$, then $v = a_1 e_1 + \cdots + a_k e_k$ and $v = b_{k+1} e_{k+1} + \cdots + b_n e_n$, which gives
\[ a_1 e_1 + \cdots + a_k e_k - b_{k+1} e_{k+1} - \cdots - b_n e_n = 0, \]
so $a_1 = \cdots = a_k = b_{k+1} = \cdots = b_n = 0$ by the linear independence of $\{e_1, \ldots, e_n\}$, showing that $v = 0$.
Conversely, if $V = U \oplus W$, and we have bases $\{u_1, \ldots, u_k\}$ of $U$ and $\{w_1, \ldots, w_m\}$ of $W$, then
\[ B = \{u_1, \ldots, u_k, w_1, \ldots, w_m\} \]
is a basis for $V$. Indeed, $B$ spans $V$, since every element of $V$ can be written as $v = u + w$ with $u \in U$, $w \in W$. Independence follows by reversing the argument above: if
\[ a_1 u_1 + \cdots + a_k u_k + b_1 w_1 + \cdots + b_m w_m = 0, \]
then $a_1 u_1 + \cdots + a_k u_k = -(b_1 w_1 + \cdots + b_m w_m)$, and equality is only possible if both sides belong to $U \cap W = \{0\}$. Since $\{u_1, \ldots, u_k\}$ is independent, the $a_i$ have to be zero, and since $\{w_1, \ldots, w_m\}$ is independent, the $b_j$ have to be zero.
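The basis-extension recipe from the example is easy to carry out by computer. The sketch below (with a made-up subspace of $\mathbb{R}^3$, not data from the text) extends a basis of $U$ by greedily appending standard basis vectors that preserve linear independence, then uses the resulting basis to decompose a vector uniquely as $u + w$:

```python
import numpy as np

# U = span{(1, 1, 0)}, a hypothetical subspace of R^3.
u1 = np.array([1.0, 1.0, 0.0])

# Extend {u1} to a basis of R^3: append each standard basis vector
# that keeps the growing set linearly independent (rank increases).
basis = [u1]
for e in np.eye(3):
    candidate = np.column_stack(basis + [e])
    if np.linalg.matrix_rank(candidate) == len(basis) + 1:
        basis.append(e)

B = np.column_stack(basis)
assert np.linalg.matrix_rank(B) == 3  # the extended set is a basis of R^3

# The appended vectors span a complement W, so V = U (+) W, and every
# v decomposes uniquely: solve B c = v; c[0]*u1 is the U-component.
v = np.array([3.0, 5.0, 2.0])
c = np.linalg.solve(B, v)
u_part = c[0] * u1
w_part = v - u_part
assert np.allclose(u_part + w_part, v)
```

For this choice of $U$, the procedure appends $(1,0,0)$ and $(0,0,1)$ (it skips $(0,1,0)$, which is already in the span of the first two vectors collected), and the decomposition of $v = (3,5,2)$ is $u = (5,5,0)$ plus $w = (-2,0,2)$.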
The argument given in the second part of Example 5.4.7 has an immediate, but important consequence.
Theorem 5.4.8.
Suppose $V = U \oplus W$. If $B_1$ is a basis of $U$ and $B_2$ is a basis of $W$, then $B = B_1 \cup B_2$ is a basis of $V$; in particular, $\dim V = \dim U + \dim W$.
Example 5.4.9.
Suppose $V = U \oplus W$, where $U$ and $W$ are $T$-invariant subspaces for some operator $T : V \to V$. Let $B_1 = \{u_1, \ldots, u_k\}$ and $B_2 = \{w_1, \ldots, w_m\}$ be bases for $U$ and $W$, respectively. Determine the matrix of $T$ with respect to the basis $B = B_1 \cup B_2$ of $V$.
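One way to explore this example computationally (with made-up vectors and eigenstructure, not data from the text): build an operator on $\mathbb{R}^3$ out of two invariant subspaces and confirm that its matrix with respect to the combined basis $B_1 \cup B_2$ is block-diagonal, with one block for $T|_U$ and one for $T|_W$:

```python
import numpy as np

# Hypothetical invariant subspaces of R^3: U = span{u1}, W = span{w1, w2},
# with R^3 = U (+) W.
u1 = np.array([1.0, 0.0, 1.0])
w1 = np.array([1.0, 0.0, -1.0])
w2 = np.array([0.0, 1.0, 0.0])

B = np.column_stack([u1, w1, w2])

# Define T by a block-diagonal matrix D in the basis B (1x1 block for
# T|_U, 2x2 block for T|_W), then convert to the standard basis.
D = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 3.0]])
T = B @ D @ np.linalg.inv(B)

# U and W are T-invariant: T(u1) stays in U, and T(w1), T(w2) have no
# u1-component when expressed in the basis B.
assert np.allclose(T @ u1, 2 * u1)
assert np.isclose((np.linalg.inv(B) @ (T @ w1))[0], 0)

# The matrix of T with respect to B recovers the block-diagonal form.
M = np.linalg.inv(B) @ T @ B
assert np.allclose(M, D)
```

The computation mirrors the answer one expects from Theorem 5.4.5 applied to both subspaces at once: because $T$ maps $U$ into $U$ and $W$ into $W$, the off-diagonal blocks of $M_B(T)$ vanish, leaving $M_{B_1}(T|_U)$ and $M_{B_2}(T|_W)$ on the diagonal.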
