Section 5.2 The matrix of a linear operator
Recall that a linear transformation $T:V\to V$ from a vector space to itself is referred to as a linear operator. Recall also that two matrices $A$ and $B$ are similar if there exists an invertible matrix $P$ such that $B = P^{-1}AP$, and that similar matrices have a lot of properties in common. In particular, if $A$ is similar to $B$, then $A$ and $B$ have the same trace, determinant, and eigenvalues. One way to understand this is the realization that two matrices are similar if they are representations of the same operator, with respect to different bases.
Since the domain and codomain of a linear operator are the same, we can consider the matrix $M_{DB}(T)$ in the case where $D$ and $B$ are the same ordered basis. This leads to the next definition.
The following result collects several useful properties of the $B$-matrix of an operator. Most of these were already encountered for the matrix of a transformation, although not all were stated formally.
Theorem 5.2.2.
Let $T:V\to V$ be a linear operator, and let $B$ be a basis for $V$. Then:
- $C_B(T(\mathbf{v})) = M_B(T)\,C_B(\mathbf{v})$ for all $\mathbf{v}\in V$.
- If $S:V\to V$ is another operator, then $M_B(S\circ T) = M_B(S)\,M_B(T)$.
- $T$ is an isomorphism if and only if $M_B(T)$ is invertible.
- If $T$ is an isomorphism, then $M_B(T^{-1}) = M_B(T)^{-1}$.
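To see how these properties play out in a computation, here is a small sympy sketch for matrix operators on $\mathbb{R}^2$. The matrices A and C and the basis below are hypothetical choices (not from the text), and the $B$-matrix is built column by column directly from the definition: its columns are the $B$-coordinate vectors of the images of the basis vectors.
import sympy as sy
# Hypothetical operators T(v) = A*v and S(v) = C*v, and a hypothetical basis of R^2
A = sy.Matrix(2, 2, [1, 2, 0, 1])
C = sy.Matrix(2, 2, [0, 1, 1, 1])
B = [sy.Matrix([1, 1]), sy.Matrix([1, -1])]
Q = sy.Matrix.hstack(*B)                  # columns are the basis vectors
def MB(Op):
    # B-matrix of the operator v -> Op*v: columns are C_B(Op*b_i) = Q**-1 * Op * b_i
    return sy.Matrix.hstack(*[Q**-1 * Op * b for b in B])
print(MB(C * A) == MB(C) * MB(A))   # composition property M_B(S T) = M_B(S) M_B(T): expected True
print(MB(A**-1) == MB(A)**-1)       # inverse property M_B(T^-1) = M_B(T)^-1: expected True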
Example 5.2.3.
Determine the matrix $M_B(T)$ of the operator $T:P_2(\mathbb{R})\to P_2(\mathbb{R})$ given by $T(a + bx + cx^2) = a + (a+b+c)x + ax^2$ with respect to the ordered basis $B = \{1-x,\, x+3x^2,\, 2-x^2\}$.
Solution.
We compute
$T(1-x) = 1 + x^2$, $\quad T(x+3x^2) = 4x$, $\quad T(2-x^2) = 2 + x + 2x^2$.
We now need to write each of these in terms of the basis $B$. We can do this by working out, one at a time, how to write each polynomial above in terms of $1-x$, $x+3x^2$, and $2-x^2$. Or we can be systematic.
Let $P$ be the matrix whose columns are given by the coefficient representations of the polynomials in $B$ with respect to the standard basis $S = \{1, x, x^2\}$. For $T(1-x) = 1 + x^2$, we need to solve the equation
$a(1-x) + b(x+3x^2) + c(2-x^2) = 1 + x^2$
for scalars $a, b, c$. But this is equivalent to the system
$a + 2c = 1$
$-a + b = 0$
$3b - c = 1$
which, in turn, is equivalent to the matrix equation
$P\begin{bmatrix} a\\ b\\ c\end{bmatrix} = \begin{bmatrix} 1\\ 0\\ 1\end{bmatrix};$
that is, $\begin{bmatrix} a\\ b\\ c\end{bmatrix} = P^{-1}\begin{bmatrix} 1\\ 0\\ 1\end{bmatrix}$. Thus, $C_B(T(1-x)) = P^{-1}C_S(T(1-x))$.
Similarly, $C_B(T(x+3x^2)) = P^{-1}C_S(T(x+3x^2))$ and $C_B(T(2-x^2)) = P^{-1}C_S(T(2-x^2))$, so the columns of $M_B(T)$ are the columns of $P^{-1}M$, where $M$ is the matrix whose columns are the standard coefficient vectors of the three polynomials computed above. Using the computer, we find:
import sympy as sy
sy.init_printing()
# P: columns are the coefficient vectors of the basis B relative to the standard basis {1, x, x^2}
P = sy.Matrix(3,3,[1,0,2,-1,1,0,0,3,-1])
# M: columns are the standard coefficient vectors of T applied to each element of B
M = sy.Matrix(3,3,[1,0,2,0,4,1,1,0,2])
P**-1, P**-1*M
Let's confirm that this works. Suppose we have
Then and we find
On the other hand,
The results agree, but possibly leave us a little confused.
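One way to carry out such a check with the computer is sketched below. The polynomial $p(x) = 1 + x + x^2$ is an arbitrary sample choice, and the formula used for $T$ and the basis $B$ are the ones assumed in Example 5.2.3 above.
import sympy as sy
x = sy.symbols('x')
P  = sy.Matrix(3, 3, [1, 0, 2, -1, 1, 0, 0, 3, -1])   # columns: standard coefficients of the basis B
M  = sy.Matrix(3, 3, [1, 0, 2, 0, 4, 1, 1, 0, 2])     # columns: standard coefficients of T applied to B
MB = P**-1 * M                                        # the B-matrix of T
p = 1 + x + x**2                          # an arbitrary sample polynomial
C_S_p = sy.Matrix([1, 1, 1])              # coefficients of p in the standard basis
C_B_p = P**-1 * C_S_p                     # coefficients of p in the basis B
Tp = p.subs(x, 0) + p.subs(x, 1)*x + p.subs(x, 0)*x**2   # T(p), using the assumed formula for T
C_B_Tp = P**-1 * sy.Matrix([Tp.coeff(x, i) for i in range(3)])   # C_B(T(p))
print(MB * C_B_p == C_B_Tp)               # expected: True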
In general, given an ordered basis $B = \{\mathbf{b}_1, \ldots, \mathbf{b}_n\}$ for a vector space $V$ with standard basis $S$, if we let
$P = \begin{bmatrix} C_S(\mathbf{b}_1) & \cdots & C_S(\mathbf{b}_n)\end{bmatrix},$
then
$C_B(\mathbf{v}) = P^{-1}C_S(\mathbf{v})$ for every $\mathbf{v}$ in $V$,
since multiplying by $P$ converts vectors written in terms of $B$ to vectors written in terms of $S$.
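For instance, with the matrix $P$ from the example above, multiplying a coordinate vector by $P$ converts $B$-coordinates into standard coordinates, and multiplying by $P^{-1}$ undoes this. A brief check (the $B$-coordinate vector $(1,1,1)$ is an arbitrary choice):
import sympy as sy
P = sy.Matrix(3, 3, [1, 0, 2, -1, 1, 0, 0, 3, -1])   # columns are C_S of the basis B
c_B = sy.Matrix([1, 1, 1])      # an arbitrary B-coordinate vector
c_S = P * c_B                   # the same vector written in standard coordinates
print(c_S)                      # coefficients of (1-x) + (x+3x^2) + (2-x^2) = 3 + 2x^2
print(P**-1 * c_S == c_B)       # converting back recovers the B-coordinates: True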
As we saw above, this gives us the result, but doesn't shed much light on the problem, unless we have an easy way to write vectors in terms of the basis $B$. Let's revisit the problem. Instead of using the given basis $B$, let's use the standard basis $S = \{1, x, x^2\}$. We quickly find
$T(1) = 1 + x + x^2$, $\quad T(x) = x$, $\quad T(x^2) = x$,
so with respect to the standard basis, the matrix of $T$ is
$M_0 = \begin{bmatrix} 1 & 0 & 0\\ 1 & 1 & 1\\ 1 & 0 & 0\end{bmatrix}.$
Now, recall that $C_B(q(x)) = P^{-1}C_S(q(x))$ for any polynomial $q(x)$, and note that for any polynomial $q(x)$, $C_S(T(q(x))) = M_0\,C_S(q(x))$. But the columns of $P$ are precisely the vectors $C_S(1-x)$, $C_S(x+3x^2)$, and $C_S(2-x^2)$, so we get
$M_B(T) = P^{-1}M_0P.$
Now we have a much more efficient method for arriving at the matrix $M_B(T)$: the matrix $M_0$ is easy to determine, the matrix $P$ is easy to determine, and with the help of the computer, it's easy to compute $P^{-1}M_0P$.
# M0: the matrix of T with respect to the standard basis {1, x, x^2}
M0 = sy.Matrix(3,3,[1,0,0,1,1,1,1,0,0])
# conjugating by the change matrix P gives the matrix of T with respect to B
P**-1*M0*P
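If we want to double-check the computation, we can compare this against the matrix $P^{-1}M$ found earlier. The following sketch re-creates the matrices from the cells above so that it stands alone:
import sympy as sy
P  = sy.Matrix(3, 3, [1, 0, 2, -1, 1, 0, 0, 3, -1])
M  = sy.Matrix(3, 3, [1, 0, 2, 0, 4, 1, 1, 0, 2])
M0 = sy.Matrix(3, 3, [1, 0, 0, 1, 1, 1, 1, 0, 0])
# Both approaches should produce the same B-matrix of T
print(P**-1 * M == P**-1 * M0 * P)   # expected: True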
Exercise 5.2.4.
Determine the matrix of the operator given by
with respect to the ordered basis
(You may want to use the computer for computational assistance.)
The matrix $P$ used in the above examples is known as a change matrix. If the columns of $P$ are the coefficient vectors of the basis $B$ with respect to another basis $D$, then we have
$C_D(\mathbf{v}) = P\,C_B(\mathbf{v})$ for all $\mathbf{v}$ in $V$.
In other words, $P$ is the matrix of the identity transformation $1_V:V\to V$, where we use the basis $B$ for the domain, and the basis $D$ for the codomain.
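As an illustration (with hypothetical bases of $\mathbb{R}^3$, not ones taken from the text), the change matrix from a basis $B$ to a basis $D$ can be computed column by column: each column is the $D$-coordinate vector of a vector in $B$. In sympy this amounts to the following sketch:
import sympy as sy
# Hypothetical bases of R^3, chosen only for illustration
B = [sy.Matrix([1, 1, 0]), sy.Matrix([0, 1, 1]), sy.Matrix([1, 0, 1])]
D = [sy.Matrix([1, 0, 0]), sy.Matrix([1, 1, 0]), sy.Matrix([1, 1, 1])]
PB = sy.Matrix.hstack(*B)   # columns: standard coordinates of the B vectors
PD = sy.Matrix.hstack(*D)   # columns: standard coordinates of the D vectors
# Change matrix from B to D: its columns are the D-coordinate vectors of the B vectors
P_D_B = PD**-1 * PB
print(P_D_B)
# Check: converting the B-coordinate vector (1, 2, 3) to D-coordinates two ways
v = 1*B[0] + 2*B[1] + 3*B[2]                        # the vector with C_B(v) = (1, 2, 3)
print(PD**-1 * v == P_D_B * sy.Matrix([1, 2, 3]))   # expected: True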
Definition 5.2.5.
Theorem 5.2.6.
Exercise 5.2.7.
Prove Theorem 5.2.6.
Example 5.2.8.
Solution.
Finding this matrix requires us to first write the vectors of one basis in terms of the vectors of the other. However, it's much easier to do this the other way around. We easily find the change matrix in the opposite direction,
and by Theorem 5.2.6, the matrix we want is the inverse of that one.
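Here is a brief computational sketch of this idea, using the basis of Example 5.2.3 as a stand-in (the bases of Example 5.2.8 are not reproduced here). Writing the elements of $B = \{1-x, x+3x^2, 2-x^2\}$ in terms of the standard basis is easy, and the change matrix in the other direction is then obtained by inverting:
import sympy as sy
# Easy direction: columns are the standard coefficient vectors of the elements of B
P_S_B = sy.Matrix(3, 3, [1, 0, 2, -1, 1, 0, 0, 3, -1])
# Hard direction, obtained for free as the inverse of the easy one
P_B_S = P_S_B**-1
print(P_B_S)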
Note that the change matrix notation is useful for linear transformations between different vector spaces as well. Recall Theorem 5.1.6, which gave the result
$M_{D'B'}(T) = M_{D'D}(1_W)\,M_{DB}(T)\,M_{BB'}(1_V),$
where (using our new notation) $M_{D'D}(1_W) = P_{D'\leftarrow D}$ and $M_{BB'}(1_V) = P_{B\leftarrow B'}$. In this notation, we have
$M_{D'B'}(T) = P_{D'\leftarrow D}\,M_{DB}(T)\,P_{B\leftarrow B'},$
which seems more intuitive.
The above results give a straightforward procedure for determining the matrix of any operator, with respect to any basis $B$, if we let $D$ be the standard basis. The importance of these results is not just their computational simplicity, however. The most important outcome of the above is that if $A$ and $A'$ give the matrix of $T$ with respect to two different bases, then
$A' = P^{-1}AP$
for an appropriate change matrix $P$, so that the two matrices are similar.
Recall from Theorem 4.1.10 that similar matrices have the same determinant, trace, and eigenvalues. This means that we can unambiguously define the determinant and trace of an operator, and that we can compute eigenvalues of an operator using any matrix representation of that operator.
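To illustrate this numerically, using the matrices from the computations above, one can check that the standard-basis matrix of the operator and its $B$-matrix share the same trace, determinant, and eigenvalues:
import sympy as sy
P  = sy.Matrix(3, 3, [1, 0, 2, -1, 1, 0, 0, 3, -1])
M0 = sy.Matrix(3, 3, [1, 0, 0, 1, 1, 1, 1, 0, 0])
MB = P**-1 * M0 * P                      # the matrix of the same operator in the basis B
print(M0.trace(), MB.trace())            # equal traces
print(M0.det(), MB.det())                # equal determinants
print(M0.eigenvals(), MB.eigenvals())    # same eigenvalues with the same multiplicities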
Exercises
1.
2.
(a)
(b)
(c)
(d)
Reminder:
On paper, confirm that