
Appendix C Solutions to Selected Exercises

1 Vector spaces
1.1 Definition and examples

Exercises

1.1.1.
1.1.1.a
Solution.
To get a vector space structure on V = (0, ∞), we will define an addition ⊕ on V by
x ⊕ y = xy,
where the right-hand side is the usual product of real numbers, and for c ∈ R and x ∈ V, we will define a scalar multiplication ⊙ by
c ⊙ x = x^c.
1.1.1.b
Solution.
For any x, y, z ∈ V, we have:
x ⊕ y = xy = yx = y ⊕ x, and
x ⊕ (y ⊕ z) = x(yz) = (xy)z = (x ⊕ y) ⊕ z.
1.1.1.d
Solution.
Let x be any element of V. Since x > 0, we know in particular that x ≠ 0, so we can define −x = 1/x, where 1/x denotes the usual reciprocal of a real number. We then have
x ⊕ (−x) = x(1/x) = 1,
and we saw above that 1 is the identity element of V.
1.1.1.e
Solution.
We assume some properties of exponents from high school algebra:
c ⊙ (x ⊕ y) = (xy)^c = x^c y^c = (c ⊙ x) ⊕ (c ⊙ y).
1.1.1.f
Solution.
This again follows from properties of exponents:
(c + d) ⊙ x = x^(c+d) = x^c x^d = (c ⊙ x) ⊕ (d ⊙ x).
1.1.1.g
Solution.
We have
c ⊙ (d ⊙ x) = c ⊙ (x^d) = (x^d)^c = x^(dc) = x^(cd) = (cd) ⊙ x.
1.1.1.h
Solution.
The last one is possibly the easiest: 1 ⊙ x = x^1 = x.
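These axioms can also be spot-checked numerically; here is a minimal sketch in Python (the names vadd and smul are ours, standing in for ⊕ and ⊙):

```python
import math

# The vector space V = (0, ∞) with x ⊕ y = xy and c ⊙ x = x^c.
def vadd(x, y):
    return x * y

def smul(c, x):
    return x ** c

x, y, z, c, d = 2.0, 3.0, 5.0, 4.0, -1.5

assert math.isclose(vadd(x, y), vadd(y, x))                    # commutativity
assert math.isclose(vadd(x, vadd(y, z)), vadd(vadd(x, y), z))  # associativity
assert math.isclose(vadd(x, 1.0), x)                           # 1 is the "zero vector"
assert math.isclose(vadd(x, 1.0 / x), 1.0)                     # -x is 1/x
assert math.isclose(smul(c, vadd(x, y)), vadd(smul(c, x), smul(c, y)))
assert math.isclose(smul(c + d, x), vadd(smul(c, x), smul(d, x)))
assert math.isclose(smul(c, smul(d, x)), smul(c * d, x))
assert math.isclose(smul(1.0, x), x)
print("all axioms hold for these sample values")
```

Of course, a numerical check for a few sample values is no substitute for the proofs above; it simply catches slips in the algebra.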

1.2 Properties

Exercise 1.2.2.

1.2.2.a
Solution.
Suppose u + v = u + w. By adding −u on the left of each side, we obtain:
−u + (u + v) = −u + (u + w)
(−u + u) + v = (−u + u) + w   by A3
0 + v = 0 + w                 by A5
v = w                         by A4,
which is what we needed to show.
1.2.2.b
Solution.
We have c0 = c(0 + 0) = c0 + c0, by A4 and S2, respectively. Adding −(c0) to both sides gives us
−c0 + c0 = −c0 + (c0 + c0).
Using associativity (A3), this becomes
−c0 + c0 = (−c0 + c0) + c0,
and since −c0 + c0 = 0 by A5, we get 0 = 0 + c0. Finally, we apply A4 on the right-hand side to get 0 = c0, as required.
1.2.2.c
Solution.
Suppose there are two vectors 01, 02 that act as additive identities. Then
01 = 01 + 02   since v + 02 = v for any v
   = 02 + 01   by axiom A2
   = 02        since v + 01 = v for any v.
So any two vectors satisfying the property in A4 must, in fact, be the same.
1.2.2.d
Solution.
Let v ∈ V, and suppose there are vectors w1, w2 ∈ V such that v + w1 = 0 and v + w2 = 0. Then
w1 = w1 + 0          by A4
   = w1 + (v + w2)   by assumption
   = (w1 + v) + w2   by A3
   = (v + w1) + w2   by A2
   = 0 + w2          by assumption
   = w2              by A4.

1.3 Subspaces

Exercises

1.3.2.
Answer 1.
H contains the zero vector of V
Answer 2.
[1000],[1001]
Answer 3.
2,[1001]
Answer 4.
H is not a subspace of V
1.3.3.
Answer 1.
H contains the zero vector of V
Answer 2.
CLOSED
Answer 3.
CLOSED
Answer 4.
H is a subspace of V
1.3.4.
Answer 1.
H does not contain the zero vector of V
Answer 2.
[1000],[0001]
Answer 3.
2,[1000]
Answer 4.
H is not a subspace of V

1.4 Span

Exercises

1.6 Linear Independence

Exercises

1.6.1.
Solution.
We set up a matrix and reduce:
Notice that this time we don’t get a unique solution, so we can conclude that these vectors are not independent. Furthermore, you can probably deduce from the above that we have 2v1 + 3v2 − v3 = 0. Now suppose that w ∈ span{v1, v2, v3}. In how many ways can we write w as a linear combination of these vectors?
1.6.2.
Solution.
In each case, we set up the defining equation for independence, collect terms, and then analyze the resulting system of equations. (If you work with polynomials often enough, you can probably jump straight to the matrix. For now, let’s work out the details.)
Suppose
r(x2+1)+s(x+1)+tx=0.
Then rx2+(s+t)x+(r+s)=0=0x2+0x+0, so
r = 0, s + t = 0, r + s = 0.
And in this case, we don’t even need to ask the computer. The first equation gives r=0 right away, and putting that into the third equation gives s=0, and the second equation then gives t=0.
Since r=s=t=0 is the only solution, the set is independent.
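The same conclusion can be reached by machine; here is a sketch using SymPy to solve the coefficient equations above:

```python
import sympy as sp

x, r, s, t = sp.symbols('x r s t')

# Set r(x^2 + 1) + s(x + 1) + t*x = 0 and compare coefficients of each power of x.
expr = sp.expand(r*(x**2 + 1) + s*(x + 1) + t*x)
eqs = [sp.Eq(expr.coeff(x, k), 0) for k in range(3)]
sol = sp.solve(eqs, [r, s, t])
print(sol)  # only the trivial solution, so the set is independent
```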
Repeating for S2 leads to the equation
(r+2s+t)x2+(r+s+5t)x+(3r+5s+t)1=0.
This gives us:
1.6.3.
Solution.
We set a linear combination equal to the zero vector, and combine:
a[1 0; 0 1] + b[1 1; 1 1] + c[1 −1; −1 1] + d[0 1; 1 0] = [0 0; 0 0], which gives
[a+b+c  b−c+d; b−c+d  a+b+c] = [0 0; 0 0].
We could proceed, but we might instead notice right away that equations 1 and 4 are identical, and so are equations 2 and 3. With only two distinct equations and 4 unknowns, we’re certain to find nontrivial solutions.
1.6.11.
Answer.
0.666667;0.75;1

1.7 Basis and dimension

Exercises

1.7.1.
1.7.1.a
Solution.
By definition, U1=span{1+x,x+x2}, and these vectors are independent, since neither is a scalar multiple of the other. Since there are two vectors in this basis, dimU1=2.
1.7.1.b
Solution.
If p(1) = 0, then p(x) = (x − 1)q(x) for some polynomial q. Since p has degree at most 2, the degree of q is at most 1. Therefore, q(x) = ax + b for some a, b ∈ R, and
p(x) = (x − 1)(ax + b) = a(x² − x) + b(x − 1).
Since p was arbitrary, this shows that U2 = span{x² − x, x − 1}.
The set {x² − x, x − 1} is also independent, since neither vector is a scalar multiple of the other. Therefore, this set is a basis, and dim U2 = 2.
1.7.1.c
Solution.
If p(−x) = p(x), then p(x) is an even polynomial, and therefore p(x) = a + bx² for a, b ∈ R. (If you didn’t know this, it’s easily verified: if
a + bx + cx² = a + b(−x) + c(−x)²,
we can immediately cancel a from each side, and since (−x)² = x², we can cancel cx² as well. This leaves bx = −bx, or 2bx = 0, which implies that b = 0.)
It follows that the set {1,x2} spans U3, and since this is a subset of the standard basis {1,x,x2} of P2, it must be independent, and is therefore a basis of U3, letting us conclude that dimU3=2.
1.7.2.
Solution.
Again, we only need to add one vector from the standard basis {1,x,x2,x3}, and it’s not too hard to check that any of them will do.
1.7.5.
Answer.
[1001],[0100],[0010]
1.7.9.
Answer 1.
1,2, or 3
Answer 2.
1 or 2
Solution.
(a) The dimension of S1 cannot exceed the dimension of S2 since S1 is contained in S2. S1 is non-zero, and thus its dimension cannot be 0. Hence 1, 2, or 3 are the possible dimensions of S1.
(b) If S1 ≠ S2, then S1 is properly contained in S2, and the dimension of S1 is strictly less than the dimension of S2. Thus only 1 or 2 are possible dimensions of S1.
1.7.10.
Answer 1.
Answer 2.
not a basis for P₂
Answer 3.
7x2x2,2(9x2+x)
1.7.11.
Answer 1.
Answer 2.
basis for P₂
Answer 3.
7x210x5,19x27x7,(3x+1)

1.8 New subspaces from old

Exercise 1.8.7.

1.8.7.a
Solution.
If (x, y, z) ∈ U, then z = 0, and if (x, y, z) ∈ W, then x = 0. Therefore, (x, y, z) ∈ U ∩ W if and only if x = z = 0, so U ∩ W = {(0, y, 0) | y ∈ R}.
1.8.7.b
Solution.
There are in fact infinitely many ways to do this. Three possible ways include:
v = (1, 1, 0) + (0, 0, 1) = (1, 0, 0) + (0, 1, 1) = (1, 1/2, 0) + (0, 1/2, 1).

Exercises

1.8.1.
1.8.1.a
Solution.
Since p(1) = 0, we know that p(x) = (x − 1)q(x) for some q(x) = ax² + bx + c. Thus, p(x) = ax²(x − 1) + bx(x − 1) + c(x − 1), so a basis is given by {(x − 1), x(x − 1), x²(x − 1)}.
(Another option is {(x − 1), (x − 1)², (x − 1)³}.)
1.8.1.b
Solution.
Since dimU=3 and dimP3(R)=4, we know that any complement of U must be one-dimensional.
Therefore, a basis for a complement W of U is given by any polynomial in P3(R) that is not in U. In particular, we can choose any polynomial p(x) with p(1) ≠ 0; for example, p(x) = x. Therefore, W = {ax | a ∈ R} is a complement of U.
1.8.2.
1.8.2.a
Solution.
If uU, then
u = (3x3, x2, x3, x4, 3x2 − 5x4) = (0, x2, 0, 0, 3x2) + (3x3, 0, x3, 0, 0) + (0, 0, 0, x4, −5x4) = x2(0,1,0,0,3) + x3(3,0,1,0,0) + x4(0,0,0,1,−5).
This shows that U = span{(0,1,0,0,3), (3,0,1,0,0), (0,0,0,1,−5)}. These vectors are also linearly independent, since each one has its first leading (nonzero) entry in a different position. (Think about what this implies for the RREF of the resulting matrix.)
1.8.2.b
Solution.
Since dimU=3, any complement of U must have dimension 2. We therefore need to find two independent vectors that do not belong to U.
Recall that U is defined by two conditions: x1 = 3x3 and x5 = 3x2 − 5x4. Therefore, a vector is not in U if x1 ≠ 3x3, or if x5 ≠ 3x2 − 5x4. This suggests that we define two vectors, each of which violates one of these conditions.
For the first, let us take x = (1, 0, 1, 0, 0). This is not in U because 1 ≠ 3(1). For the second, let us take y = (0, 1, 0, 1, 1). This is not in U because 1 ≠ 3(1) − 5(1). We know that {x, y} is linearly independent, because each vector has nonzero entries in different positions. Therefore, W = span{x, y} is a complement of U, since it is spanned by vectors not in U, and it has the correct dimension.

2 Linear Transformations
2.1 Definition and examples

Exercises

2.1.1.
Solution.
We need to find scalars a, b, c such that
2 − x + 3x² = a(x + 2) + b(1) + c(x² + x).
We could set up a system and solve, but this time it’s easy enough to just work our way through. We must have c = 3, to get the correct coefficient for x². This gives
2 − x + 3x² = a(x + 2) + b(1) + 3x² + 3x.
Now, we have to have 3x + ax = −x, so a = −4. Putting this in, we get
2 − x + 3x² = −4x − 8 + b + 3x² + 3x.
Simplifying this leaves us with b = 10. Finally, we find:
T(2 − x + 3x²) = T(−4(x + 2) + 10(1) + 3(x² + x)) = −4T(x + 2) + 10T(1) + 3T(x² + x) = −4(1) + 10(5) + 3(0) = 46.
2.1.3.
Answer 1.
[a11+b11a21+b21a12+b12a22+b22]
Answer 2.
[a11a21a12a22]+[b11b21b12b22]
Answer 3.
Yes, they are equal
Answer 4.
[ca11ca21ca12ca22]
Answer 5.
c([a11a21a12a22])
Answer 6.
Yes, they are equal
Answer 7.
f is a linear transformation
2.1.4.
Answer 1.
2x + 2y − 3
Answer 2.
(2x − 3) + (2y − 3)
Answer 3.
No, they are not equal
Answer 4.
2cx − 3
Answer 5.
c(2x − 3)
Answer 6.
No, they are not equal
Answer 7.
f is not a linear transformation
2.1.5.
Answer 1.
6T(v1)T(v2)
Answer 2.
6(w1+w2)+318w2
2.1.8.
Answer.
[9x+4y9x+4y]
2.1.10.
2.1.10.a
Answer.
6x2+10x+4
2.1.10.b
Answer.
32x2+36x+18
2.1.10.d
Answer.
a(2x2+2x+0)+b(32x2+36x+18)+c(6x2+10x+4)

2.2 Kernel and Image

Exercises

2.2.1.
2.2.1.a
Solution.
We have T(0) = 0, since 0 − 0ᵀ = 0. Using properties of the transpose and matrix algebra, we have
T(A + B) = (A + B) − (A + B)ᵀ = (A − Aᵀ) + (B − Bᵀ) = T(A) + T(B)
and
T(kA) = (kA) − (kA)ᵀ = kA − kAᵀ = k(A − Aᵀ) = kT(A).
2.2.1.b
Solution.
It’s clear that if Aᵀ = A, then T(A) = 0. On the other hand, if T(A) = 0, then A − Aᵀ = 0, so A = Aᵀ. Thus, the kernel consists of all symmetric matrices.
2.2.1.c
Solution.
If B = T(A) = A − Aᵀ, then
Bᵀ = (A − Aᵀ)ᵀ = Aᵀ − A = −B,
so certainly every matrix in im T is skew-symmetric. On the other hand, if B is skew-symmetric, then B = T((1/2)B), since
T((1/2)B) = (1/2)T(B) = (1/2)(B − Bᵀ) = (1/2)(B − (−B)) = B.
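Both descriptions can be sanity-checked numerically; here is a sketch in NumPy, where T is the map A ↦ A − Aᵀ from the exercise:

```python
import numpy as np

# T(A) = A - A^T on 2x2 matrices.
def T(A):
    return A - A.T

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))

S = A + A.T   # a symmetric matrix, which should lie in the kernel
K = T(A)      # an arbitrary output of T

assert np.allclose(T(S), 0)       # symmetric matrices are sent to zero
assert np.allclose(K.T, -K)       # every output is skew-symmetric
assert np.allclose(T(K / 2), K)   # every skew-symmetric B equals T(B/2)
print("kernel/image checks pass")
```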
2.2.2.
Answer.
[1101],[2111]
2.2.5.
Answer 1.
36,13
Answer 2.
[333210]
Answer 3.
3x+3y+3z,2x+1y+0z
Answer 4.
1,2,1
Answer 5.
1,0,0,1
Answer 6.
surjective
2.2.7.
Answer 1.
[5133]
Answer 2.
bijective
Answer 3.
[0.1666670.05555560.1666670.277778]
2.2.8.
Answer 1.
[56910]
Answer 2.
[433512]
Answer 3.
injective
2.2.9.
Answer 1.
[21287]
Answer 2.
[915122035]
Answer 3.
none of these
2.2.11.
Answer 1.
Answer 2.
Answer 3.
Solution.
Solution:
(a) Since e1, e2, e3 span R3, we get that L(R3) is spanned by L(e1), L(e2), L(e3). So
L(R3)=span{L(e1),L(e2),L(e3)}=span{[12036],[24120],[12036]}=span{[103],[210],[103]}=span{[103],[210]}
and hence the dimension of the range is 2.
(b) The rank-nullity theorem implies that the dimension of the kernel is 32=1.
(c) Notice that
L(S)=span{L(11e1),L(24e2+24e3)}=span{L(e1),L(e2)+L(e3))}=span{[12036],[121236]}
and it is easy to check that these two vectors are linearly independent. Therefore, the dimension of L(S) is 2.
2.2.13.
2.2.13.a
Solution.
Suppose T: V → W is injective. Then ker T = {0}, so
dim V = 0 + dim im T ≤ dim W,
since im T is a subspace of W.
Conversely, suppose dim V ≤ dim W. Choose a basis {v1, …, vm} of V, and a basis {w1, …, wn} of W, where m ≤ n. By Theorem 2.1.8, there exists a linear transformation T: V → W with T(vi) = wi for i = 1, …, m. (The main point here is that we run out of basis vectors for V before we run out of basis vectors for W.) This map is injective: if T(v) = 0, write v = c1v1 + ⋯ + cmvm. Then
0 = T(v) = T(c1v1 + ⋯ + cmvm) = c1T(v1) + ⋯ + cmT(vm) = c1w1 + ⋯ + cmwm.
Since {w1, …, wm} is a subset of a basis, it’s independent. Therefore, the scalars ci must all be zero, and therefore v = 0.
2.2.13.b
Solution.
Suppose T: V → W is surjective. Then dim im T = dim W, so
dim V = dim ker T + dim W ≥ dim W.
Conversely, suppose dim V ≥ dim W. Again, choose a basis {v1, …, vm} of V, and a basis {w1, …, wn} of W, where this time, m ≥ n. We can define a linear transformation as follows:
T(v1) = w1, …, T(vn) = wn, and T(vj) = 0 for j > n.
It’s easy to check that this map is a surjection: given w ∈ W, we can write it in terms of our basis as w = c1w1 + ⋯ + cnwn. Using these same scalars, we can define v = c1v1 + ⋯ + cnvn ∈ V such that T(v) = w.
Note that it’s not important how we define T(vj) when j>n. The point is that this time, we run out of basis vectors for W before we run out of basis vectors for V. Once each vector in the basis of W is in the image of T, we’re guaranteed that T is surjective, and we can define the value of T on any remaining basis vectors however we want.

2.3 Isomorphisms, composition, and inverses
2.3.2 Composition and inverses

Exercises

2.3.2.2.
Solution.
Let w1, w2 ∈ W. Then there exist v1, v2 ∈ V with w1 = T(v1), w2 = T(v2). We then have
T⁻¹(w1 + w2) = T⁻¹(T(v1) + T(v2)) = T⁻¹(T(v1 + v2)) = v1 + v2 = T⁻¹(w1) + T⁻¹(w2).
For any scalar c, we similarly have
T⁻¹(cw1) = T⁻¹(cT(v1)) = T⁻¹(T(cv1)) = cv1 = cT⁻¹(w1).
2.3.2.4.
Answer.
cx2+(a+4c)x+(4)a+b+(444)c
2.3.2.5.
Answer 1.
(9.5x+(4.5)y,2x+1y)
Answer 2.
(0.333333x+0.666667y+(0.666667)z,0.333333x+0.333333y+0.666667z,0.333333x+(0.333333)y+0.333333z)
Answer 3.
192292(4)
Answer 4.
22(4)422
Answer 5.
6+21(3)213
Answer 6.
16+3+2113
Answer 7.
1161(3)+13

3 Orthogonality and Applications
3.1 Orthogonal sets of vectors
3.1.3 Exercises

3.1.3.1.

Solution.
This is an exercise in properties of the dot product. We have
‖x + y‖² = (x + y)·(x + y) = x·x + x·y + y·x + y·y = ‖x‖² + 2x·y + ‖y‖².

3.1.3.2.

Solution.
If x = 0, then the result follows immediately from the dot product formula in Definition 3.1.1. Conversely, suppose x·vi = 0 for each i. Since the vi span Rⁿ, there must exist scalars c1, c2, …, ck such that x = c1v1 + c2v2 + ⋯ + ckvk. But then
x·x = x·(c1v1 + c2v2 + ⋯ + ckvk) = c1(x·v1) + c2(x·v2) + ⋯ + ck(x·vk) = c1(0) + c2(0) + ⋯ + ck(0) = 0.
Since x·x = 0, positive-definiteness of the dot product gives x = 0.

3.1.3.3.

Solution.
All three vectors are nonzero. To confirm the set is orthogonal, we simply compute dot products:
(1, 0, 1, 0)·(−1, 0, 1, 1) = −1 + 0 + 1 + 0 = 0
(−1, 0, 1, 1)·(1, 1, −1, 2) = −1 + 0 − 1 + 2 = 0
(1, 0, 1, 0)·(1, 1, −1, 2) = 1 + 0 − 1 + 0 = 0.
To find a fourth vector, we proceed as follows. Let x = (a, b, c, d). We want x to be orthogonal to the three vectors in our set. Computing dot products, we must have:
(a, b, c, d)·(1, 0, 1, 0) = a + c = 0
(a, b, c, d)·(−1, 0, 1, 1) = −a + c + d = 0
(a, b, c, d)·(1, 1, −1, 2) = a + b − c + 2d = 0.
This is simply a homogeneous system of three equations in four variables. Using the Sage cell below, we find that our vector must satisfy a = (1/2)d, b = −3d, c = −(1/2)d.
One possible nonzero solution is to take d = 2, giving x = (1, −6, −1, 2). We’ll leave the verification that this vector works as an exercise.
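Here is that verification, done numerically (a NumPy sketch; the sign conventions match the dot products computed above):

```python
import numpy as np

v1 = np.array([1, 0, 1, 0])
v2 = np.array([-1, 0, 1, 1])
v3 = np.array([1, 1, -1, 2])
x = np.array([1, -6, -1, 2])   # candidate fourth vector, taking d = 2

# The original three vectors are pairwise orthogonal...
assert v1 @ v2 == 0 and v2 @ v3 == 0 and v1 @ v3 == 0
# ...and x is orthogonal to each of them.
assert x @ v1 == 0 and x @ v2 == 0 and x @ v3 == 0
print("orthogonality confirmed")
```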

3.1.3.4.

Solution.
We compute
(v·x1/‖x1‖²)x1 + (v·x2/‖x2‖²)x2 + (v·x3/‖x3‖²)x3 = (4/2)x1 + (−9/3)x2 + (−28/7)x3 = 2(1, 0, 1, 0) − 3(−1, 0, 1, 1) − 4(1, 1, −1, 2) = (1, −4, 3, −11) = v,
so v ∈ span{x1, x2, x3}.
On the other hand, repeating the same calculation with w, we find
(w·x1/‖x1‖²)x1 + (w·x2/‖x2‖²)x2 + (w·x3/‖x3‖²)x3 = (1/2)(1, 0, 1, 0) − (5/3)(−1, 0, 1, 1) + (4/7)(1, 1, −1, 2) = (115/42, 4/7, −73/42, −11/21) ≠ w,
so w ∉ span{x1, x2, x3}.
Soon, we’ll see that the quantity we computed when showing that w ∉ span{x1, x2, x3} is, in fact, the orthogonal projection of w onto the subspace span{x1, x2, x3}.

3.1.3.6.

Answer 1.
Answer 2.
[0.7715170.3086070.3086070.46291]

3.1.3.7.

Answer.
−115
Solution.
Note that the distributive property, together with symmetry, lets us handle this dot product using what is essentially “FOIL”:
(5x − 3y)·(x + 5y) = (5x)·x + (5x)·(5y) + (−3y)·x + (−3y)·(5y) = 5(x·x) + 25(x·y) − 3(y·x) − 15(y·y) = 5‖x‖² + 22(x·y) − 15‖y‖² = 5(2)² + 22(−5) − 15(1)² = −115.

3.2 The Gram-Schmidt Procedure

Exercises

3.2.3.
Answer 1.
[0031110]
Answer 2.
[101.816790.6717561.938932]
Answer 3.
[1.23978140.4356310.1610740.4649181.52044]
3.2.4.
Answer 1.
[1013],[1310]
Answer 2.
[3636],[1.545450.09090912.545452.09091]
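The answers in this section come from the Gram-Schmidt procedure; here is a minimal sketch of it in Python (the sample vectors are our own, not those of the exercises):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize a list of independent vectors, in order.

    Each vector has its projection onto every previously accepted
    vector subtracted off, exactly as in the Gram-Schmidt procedure.
    """
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for u in basis:
            w -= (v @ u) / (u @ u) * u   # subtract the projection of v onto u
        basis.append(w)
    return basis

# Example (not from the exercises): orthogonalize two vectors in R^3.
u1, u2 = gram_schmidt([np.array([1, 1, 0]), np.array([1, 0, 1])])
print(u1, u2)   # u1 stays (1,1,0); u2 becomes (1/2, -1/2, 1)
assert abs(u1 @ u2) < 1e-12
```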

3.3 Orthogonal Projection

Exercises

3.3.7.
Answer.
[16510],[25701]
3.3.8.
Answer.
5.50348614864795
3.3.9.
Answer.
[3.658141.262025.41247.85194]
3.3.10.
Answer.
[4.157890.84210520.0526]

3.5 Project: dual basis

Exercise 3.5.1.

Solution.
We know that dim V* = dim V = n. Since there are n vectors in the dual basis, it’s enough to show that they’re linearly independent. Suppose that
c1ϕ1 + c2ϕ2 + ⋯ + cnϕn = 0
for some scalars c1, c2, …, cn.
This means that (c1ϕ1 + c2ϕ2 + ⋯ + cnϕn)(v) = 0 for all v ∈ V; in particular, this must be true for the basis vectors v1, …, vn.
By the definition of the dual basis, for each i = 1, 2, …, n we have
(c1ϕ1 + c2ϕ2 + ⋯ + cnϕn)(vi) = 0 + ⋯ + 0 + ci(1) + 0 + ⋯ + 0 = ci = 0.
Thus, ci = 0 for each i, and therefore the ϕi are linearly independent.

Exercise 3.5.2.

Solution.
There are two things to check. First, we show that T*(ϕ) ∈ V* for each ϕ ∈ W*. Since T: V → W and ϕ: W → R, it follows that T*ϕ = ϕ∘T is a map from V to R. But we must also show that it’s linear.
Given v1, v2 ∈ V, we have
(T*ϕ)(v1 + v2) = ϕ(T(v1 + v2)) = ϕ(T(v1) + T(v2))   because T is linear
             = ϕ(T(v1)) + ϕ(T(v2))                  because ϕ is linear
             = (T*ϕ)(v1) + (T*ϕ)(v2).
Similarly, for any scalar c,
(T*ϕ)(cv) = ϕ(T(cv)) = ϕ(cT(v)) = c(ϕ(T(v))) = c((T*ϕ)(v)).
This shows that T*ϕ ∈ V*.
Next, we need to show that T*: W* → V* is a linear map. Let ϕ, ψ ∈ W*, and let c be a scalar. We have:
T*(ϕ + ψ) = (ϕ + ψ)∘T = ϕ∘T + ψ∘T = T*ϕ + T*ψ,
and
T*(cϕ) = (cϕ)∘T = c(ϕ∘T) = cT*ϕ.
This follows from the vector space structure on any space of functions. For a vector v ∈ V, we have
(T*(cϕ))(v) = (cϕ)(T(v)) = c(ϕ(T(v))) = c((T*ϕ)(v)).

Exercise 3.5.3.

Solution.
Let p be a polynomial. Then
(D*ϕ)(p) = ϕ(D(p)) = ϕ(p′) = ∫₀¹ p′(x) dx.
By the Fundamental Theorem of Calculus (or a tedious calculation, if you prefer), we get
(D*ϕ)(p) = p(1) − p(0).

Exercise 3.5.4.

Solution.
Let ϕ ∈ W*. We have
(S + T)*(ϕ) = ϕ∘(S + T) = ϕ∘S + ϕ∘T = S*ϕ + T*ϕ,
since ϕ is linear. Similarly,
(kS)*(ϕ) = ϕ∘(kS) = k(ϕ∘S) = k(S*ϕ).
Finally, we have
(ST)*ϕ = ϕ∘(ST) = (ϕ∘S)∘T = T*(ϕ∘S) = T*(S*ϕ) = (T*S*)(ϕ),
since composition is associative.

Exercise 3.5.5.

Solution.
As per the hint, suppose ϕ = c1ϕ1 + c2ϕ2 + c3ϕ3 + c4ϕ4, and that ϕ ∈ U⁰. Then
ϕ(2a+b, 3b, a, a−2b) = c1(2a+b) + c2(3b) + c3(a) + c4(a−2b) = a(2c1 + c3 + c4) + b(c1 + 3c2 − 2c4).
We wish for this to be zero for all possible values of a and b. Therefore, we must have
2c1 + c3 + c4 = 0 and c1 + 3c2 − 2c4 = 0.
Solving gives us c1 = −(1/2)c3 − (1/2)c4 and c2 = (1/6)c3 + (5/6)c4, so
ϕ = (−(1/2)c3 − (1/2)c4)ϕ1 + ((1/6)c3 + (5/6)c4)ϕ2 + c3ϕ3 + c4ϕ4 = c3(−(1/2)ϕ1 + (1/6)ϕ2 + ϕ3) + c4(−(1/2)ϕ1 + (5/6)ϕ2 + ϕ4).
This gives us the following basis for U⁰:
{ϕ3 − (1/2)ϕ1 + (1/6)ϕ2, ϕ4 − (1/2)ϕ1 + (5/6)ϕ2}.

4 Diagonalization
4.1 Eigenvalues and Eigenvectors

Exercises

4.2 Diagonalization of symmetric matrices

Exercise 4.2.1.

Solution.
Take x=ei and y=ej, where {e1,,en} is the standard basis for Rn. Then with A=[aij] we have
aij=ei(Aej)=(Aei)ej=aji,
which shows that AT=A.

Exercises

4.4 Quadratic forms

Exercises

4.4.1.
Answer.
[74.534.512323]
4.4.2.
Answer.
9x125x22+4x3216x1x2+10x1x3+18x2x3
4.4.3.
Answer 1.
2,3,6
Answer 2.
Positive definite

4.5 Diagonalization of complex matrices
4.5.2 Complex matrices

Exercise 4.5.8.

Solution.
We have A¯=[41+i23i1i57i2+3i7i4], so
AH=(A¯)T=[41i2+3i1+i57i23i7i4]=A,
and
BBH=14[1+i21i2i][1i1+i22i]=14[(1+i)(1i)+2(1+i)(1+i)2i(1i)(1i)+2i(1i)(1+i)+2]=14[4004]=[1001],
so that BH=B1.

4.5.3 Exercises

4.5.3.3.

Answer.
1, 3.31662i, −3.31662i

4.5.3.4.

Answer.
2, 2, 3.31662i, −3.31662i

4.5.3.5.

Answer.
[9.48683^n cos(1.24905n)  −9.48683^n sin(1.24905n); 9.48683^n sin(1.24905n)  9.48683^n cos(1.24905n)]

4.5.3.6.

Answer.
[5.83095ncos(0.54042n)5.83095nsin(0.54042n)5.83095n(1cos(0.54042n)+2sin(0.54042n))+(1)1n5.83095nsin(0.54042n)5.83095ncos(0.54042n)5.83095n(1sin(0.54042n)2cos(0.54042n))+21n001n]

4.6 Project: linear dynamical systems

Exercise 4.6.1.

4.6.1.a
Solution.
The polynomial is given by p(x) = x² − bx − a.
4.6.1.b
Solution.
Computing the right-hand side, we find
Avk = [0 1; a b][xk; xk+1] = [xk+1; axk + bxk+1] = [xk+1; xk+2] = vk+1,
since axk + bxk+1 = xk+2.
4.6.1.c
Solution.
We find
cA(x) = det[x −1; −a x−b] = x(x − b) − a = x² − bx − a.
This is the same as the polynomial found in part (a).
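This agreement can be confirmed symbolically; a SymPy sketch:

```python
import sympy as sp

a, b, x = sp.symbols('a b x')

# Companion-style matrix of the recurrence x_{k+2} = a*x_k + b*x_{k+1}.
A = sp.Matrix([[0, 1],
               [a, b]])
p = A.charpoly(x).as_expr()
print(sp.expand(p))   # x**2 - b*x - a, matching the polynomial from part (a)
```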

Exercise 4.6.2.

4.6.2.a
Solution.
We want A[xk; xk+1; xk+2] = [xk+1; xk+2; xk+3], where xk+3 = axk + bxk+1 + cxk+2.
This suggests that we take A = [0 1 0; 0 0 1; a b c].
4.6.2.b
Solution.
We find
cA(x) = det[x −1 0; 0 x −1; −a −b x−c] = x·det[x −1; −b x−c] + 1·det[0 −1; −a x−c] = x(x² − cx − b) − a = x³ − cx² − bx − a.
This is the same as the associated polynomial of the linear recurrence.
4.6.2.c
Solution.
By direct computation, we find
Ax = [0 1 0; 0 0 1; a b c][1; λ; λ²] = [λ; λ²; a + bλ + cλ²].
But λ is an eigenvalue of A, so cA(λ) = λ³ − cλ² − bλ − a = 0. Therefore, a + bλ + cλ² = λ³, and Ax = [λ; λ²; λ³] = λx.

Exercise 4.6.3.

4.6.3.a
Solution.
We use the matrix from question 1(b), with a = b = 1, giving A = [0 1; 1 1]. The characteristic polynomial is cA(x) = x² − x − 1 (the same as the associated polynomial of the recurrence), and by the quadratic formula, cA(x) = 0 if
x = (−(−1) ± √((−1)² − 4(1)(−1)))/(2(1)) = (1 ± √5)/2.
We can then check that
Ax+ = [0 1; 1 1][1; λ+] = [λ+; 1 + λ+],
and
λ+² = 1/4 + √5/2 + 5/4 = 3/2 + √5/2 = 1 + λ+,
so that Ax+ = [λ+; λ+²] = λ+x+.
A similar calculation shows that Ax− = λ−x−.
4.6.3.b
Solution.
Letting P⁻¹v0 = [a0; a1], we have
vn = A^n v0 = PD^nP⁻¹v0 = [x+ x−][λ+^n 0; 0 λ−^n][a0; a1] = a0λ+^n x+ + a1λ−^n x−.
In particular, comparing first entries, xn = a0λ+^n + a1λ−^n.
4.6.3.c
Solution.
We find that λ+ ≈ 1.618, while λ− ≈ −0.618. Since |λ+| > 1, the value of λ+^n will grow arbitrarily large as n gets large, while the value of λ−^n will get closer and closer to 0, since |λ−| < 1.
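This growth and decay is easy to check numerically; here is a Python sketch using the closed form xn = (λ+^n − λ−^n)/√5, which results from the Fibonacci initial conditions x0 = 0, x1 = 1:

```python
import math

phi = (1 + math.sqrt(5)) / 2   # λ+, ≈ 1.618
psi = (1 - math.sqrt(5)) / 2   # λ−, ≈ −0.618

# Fibonacci numbers: x_0 = 0, x_1 = 1, x_{k+2} = x_k + x_{k+1}.
x = [0, 1]
for _ in range(20):
    x.append(x[-2] + x[-1])

# Binet's formula: x_n = (λ+^n − λ−^n)/√5; the λ−^n term decays to 0.
for n, xn in enumerate(x):
    assert round((phi**n - psi**n) / math.sqrt(5)) == xn
print(x[:10])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```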

Exercise 4.6.5. Markov Chains.

4.6.5.a
Solution.
The matrix is M=[0.40.30.60.7], and the initial state is v0=[10].
4.6.5.b
Solution.
We find
cM(x) = det[x−0.4 −0.3; −0.6 x−0.7] = (x − 0.4)(x − 0.7) − 0.18 = x² − 1.1x + 0.1 = (x − 1)(x − 0.1),
so the eigenvalues are λ1 = 1 and λ2 = 0.1.
An eigenvector associated to λ1 is x1 = [1; 2], and an eigenvector associated to λ2 is x2 = [1; −1]. If we let P = [1 1; 2 −1], then we find
vn = M^n v0 = [1 1; 2 −1][1 0; 0 (0.1)^n][1/3 1/3; 2/3 −1/3][1; 0] = [1 1; 2 −1][1 0; 0 (0.1)^n][1/3; 2/3] = [1 1; 2 −1][1/3; (2/3)(0.1)^n] = [1/3 + (2/3)(0.1)^n; 2/3 − (2/3)(0.1)^n].
Based on this, we see that as n → ∞, vn approaches the vector [1/3; 2/3]. So there is a 1 in 3 chance of finding the mosquito in swamp A, and a 2 in 3 chance of finding it in swamp B.
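Iterating the transition matrix confirms this limit numerically; a NumPy sketch:

```python
import numpy as np

M = np.array([[0.4, 0.3],
              [0.6, 0.7]])   # column-stochastic transition matrix
v0 = np.array([1.0, 0.0])    # the mosquito starts in swamp A

v = v0
for _ in range(50):
    v = M @ v

# The state converges to the steady-state vector (1/3, 2/3).
assert np.allclose(v, [1/3, 2/3])
print(v)
```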
4.6.5.c
Solution.
We found that this vector is v = [1; 2]. If we divide by 1 + 2 = 3, we get [1/3; 2/3], which is the same as before.

Exercise 4.6.6. Properties of stochastic matrices.

4.6.6.a
Solution.
Since the rows of Mᵀ sum to 1, if x = (1, 1, …, 1), we find that Mᵀx = x. Therefore, 1 is an eigenvalue of Mᵀ, and therefore of M.
4.6.6.b
Solution.
Suppose that Mx = λx for some λ ≠ 1 and some x ≠ 0. Let xj be the largest entry of x. (We know that the entries of x are not all the same, since λ ≠ 1.) Then we have
|λ||xj| = |λxj| = |∑_{k=1}^n mjk xk| ≤ ∑_{k=1}^n mjk |xk| ≤ ∑_{k=1}^n mjk |xj| = (1)|xj|.
Therefore, |λ| ≤ 1.
4.6.6.c
Solution.
Let Mi = [ai bi; ci di], where ai + ci = 1 and bi + di = 1, for i = 1, 2. Then
M1M2 = [a1 b1; c1 d1][a2 b2; c2 d2] = [a1a2 + b1c2  a1b2 + b1d2; c1a2 + d1c2  c1b2 + d1d2].
Now we check:
(a1a2 + b1c2) + (c1a2 + d1c2) = (a1 + c1)a2 + (b1 + d1)c2 = a2 + c2 = 1,
and
(a1b2 + b1d2) + (c1b2 + d1d2) = (a1 + c1)b2 + (b1 + d1)d2 = b2 + d2 = 1,
so M1M2 is stochastic.
4.6.6.d
Solution.
Suppose y is an eigenvector of Mᵀ corresponding to the eigenvalue 1, such that y is not a multiple of v. Then the entries of y cannot all be equal. Let yi be the smallest entry of y, and note that we must have yi < yk for some k.
If Mᵀy = y, then the ith entry of this equation is
yi = ∑_{j=1}^n mji yj > ∑_{j=1}^n mji yi = yi ∑_{j=1}^n mji = yi(1) = yi,
which is a contradiction. Thus, v must span the λ = 1 eigenspace.
4.6.6.f
Solution.
The eigenvalues of M are ±1, and we have M^n = [1 0; 0 1] when n is even, while M^n = M when n is odd.
For any initial condition v0 = [a; b] ≠ [0.5; 0.5], we find that M^n v0 is either [a; b] or [b; a], depending on whether n is even or odd.

4.7 Matrix Factorizations and Eigenvalues
4.7.3 Exercises

4.7.3.2.

Answer.
8.24621, −8.24621, 0

4.7.3.3.

Answer 1.
[0.4285710.2857140.2857140.8571430.8571430.428571]
Answer 2.
[72107]

4.7.3.4.

Answer 1.
[0.3333330.6666670.6666670.6666670.6666670.3333330.6666670.3333330.666667]
Answer 2.
[933033006]

5 Change of Basis
5.1 The matrix of a linear transformation

Exercise 5.1.2.

Solution.
It’s clear that CB(0)=0, since the only way to write the zero vector in V in terms of B (or, indeed, any independent set) is to set all the scalars equal to zero.
If we have two vectors v,w given by
v = a1e1 + ⋯ + anen, w = b1e1 + ⋯ + bnen,
then
v + w = (a1 + b1)e1 + ⋯ + (an + bn)en,
so
CB(v + w) = (a1 + b1, …, an + bn) = (a1, …, an) + (b1, …, bn) = CB(v) + CB(w).
Finally, for any scalar c, we have
CB(cv) = CB((ca1)e1 + ⋯ + (can)en) = (ca1, …, can) = c(a1, …, an) = cCB(v).
This shows that CB is linear. To see that CB is an isomorphism, we can simply note that CB takes the basis B to the standard basis of Rⁿ. Alternatively, we can give the inverse: CB⁻¹: Rⁿ → V is given by
CB⁻¹(c1, …, cn) = c1e1 + ⋯ + cnen.

Exercises

5.1.1.
Solution.
We must first write our general input in terms of the given basis. With respect to the standard basis
B0={[1000],[0100],[0010],[0001]},
we have the matrix P=[1001011000100101], representing the change from the basis B to the basis B0. The basis D of P2(R) is already the standard basis, so we need the matrix MDB(T)P⁻¹:
For a matrix X = [a b; c d] we find
MDB(T)P⁻¹CB0(X) = [2 −2 2 1; 0 3 −8 1; 1 1 2 −1][a; b; c; d] = [2a − 2b + 2c + d; 3b − 8c + d; a + b + 2c − d].
But this is equal to CD(T(X)), so
T([a b; c d]) = CD⁻¹[2a − 2b + 2c + d; 3b − 8c + d; a + b + 2c − d] = (2a − 2b + 2c + d) + (3b − 8c + d)x + (a + b + 2c − d)x².
5.1.2.
Answer.
[00.750000200002]
5.1.3.
Answer.
[002601230100]
5.1.4.
Answer.
[101122043303]
5.1.5.
Answer.
[201419857]
5.1.6.
Answer.
[5504917]

5.2 The matrix of a linear operator

Exercise 5.2.4.

Solution.
With respect to the standard basis, we have
M0=MB0(T)=[324150027],
and the matrix P is given by P=[131212025]. Thus, we find
MB(T)=P1M0P=[9563671515104633].

Exercises

5.2.2.
5.2.2.a
Answer.
[491321849]
5.2.2.b
Answer.
[17122217]

5.7 Jordan Canonical Form

Exercise 5.7.7.

Solution.
With respect to the standard basis of R4, the matrix of T is
M = [1 1 0 0; 0 1 0 0; 0 1 2 0; 1 1 1 1].
We find (perhaps using the Sage cell provided below, and the code from the example above) that
cT(x)=(x1)3(x2),
so T has eigenvalues 1 (of multiplicity 3), and 2 (of multiplicity 1).
We tackle the repeated eigenvalue first. The reduced row-echelon form of M − I is given by
R1 = [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 0],
so
E1(M) = span{x1}, where x1 = (0, 0, 0, 1).
We now attempt to solve (MI)x=x1. We find
[0 1 0 0 | 0; 0 0 0 0 | 0; 0 1 1 0 | 0; 1 1 1 0 | 1] → RREF → [1 0 0 0 | 1; 0 1 0 0 | 0; 0 0 1 0 | 0; 0 0 0 0 | 0],
so x = tx1 + x2, where x2 = (1, 0, 0, 0). We take x2 as our first generalized eigenvector. Note that (M − I)²x2 = (M − I)x1 = 0, so x2 ∈ null(M − I)², as expected.
Finally, we look for an element of null(M − I)³ of the form x3, where (M − I)x3 = x2. We set up and solve the system (M − I)x = x2 as follows:
[0 1 0 0 | 1; 0 0 0 0 | 0; 0 1 1 0 | 0; 1 1 1 0 | 0] → RREF → [1 0 0 0 | 0; 0 1 0 0 | 1; 0 0 1 0 | −1; 0 0 0 0 | 0],
so x = tx1 + x3, where x3 = (0, 1, −1, 0).
Finally, we deal with the eigenvalue 2. The reduced row-echelon form of M − 2I is
R2 = [1 0 0 0; 0 1 0 0; 0 0 1 −1; 0 0 0 0],
so
E2(M) = span{y}, where y = (0, 0, 1, 1).
Our basis of column vectors is therefore B={x1,x2,x3,y}. Note that by design,
Mx1 = x1, Mx2 = x1 + x2, Mx3 = x2 + x3, My = 2y.
The corresponding Jordan basis for R4 is
{(0, 0, 0, 1), (1, 0, 0, 0), (0, 1, −1, 0), (0, 0, 1, 1)},
and with respect to this basis, we have
MB(T) = [1 1 0 0; 0 1 1 0; 0 0 1 0; 0 0 0 2].
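The entire computation can be double-checked with SymPy's jordan_form; a sketch:

```python
import sympy as sp

# The matrix of T with respect to the standard basis, rows as in the solution.
M = sp.Matrix([
    [1, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 1, 2, 0],
    [1, 1, 1, 1],
])

x = sp.symbols('x')
assert sp.factor(M.charpoly(x).as_expr()) == (x - 1)**3 * (x - 2)

# jordan_form returns P and J with M = P J P^{-1}.
P, J = M.jordan_form()
print(J)  # a 3x3 Jordan block for eigenvalue 1 and a 1x1 block for eigenvalue 2
```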

Exercises

5.7.1.
Answer.
(x − 1)(x − 1)(x − 3)
5.7.2.
Answer.
[0111213201201111],[3100030000100001]
5.7.3.
Answer.
[1210103110220121],[2000021000200001]