Section 6.2 Orthogonal complements and the matrix transpose

We’ve now seen how the dot product enables us to determine the angle between two vectors and, more specifically, when two vectors are orthogonal. Moving forward, we will explore how the orthogonality condition simplifies many common tasks, such as expressing a vector as a linear combination of a given set of vectors.
This section introduces the notion of an orthogonal complement, the set of vectors each of which is orthogonal to a prescribed subspace. We’ll also find a way to describe dot products using matrix products, which allows us to study orthogonality using many of the tools for understanding linear systems that we developed earlier.

Preview Activity 6.2.1.

  1. Sketch the vector \(\vvec=\twovec{-1}2\) on Figure 6.2.1 and one vector that is orthogonal to it.
    Figure 6.2.1. Sketch the vector \(\vvec\) and one vector orthogonal to it.
  2. If a vector \(\xvec\) is orthogonal to \(\vvec\text{,}\) what do we know about the dot product \(\vvec\cdot\xvec\text{?}\)
  3. If we write \(\xvec=\twovec xy\text{,}\) use the dot product to write an equation for the vectors orthogonal to \(\vvec\) in terms of \(x\) and \(y\text{.}\)
  4. Use this equation to sketch the set of all vectors orthogonal to \(\vvec\) in Figure 6.2.1.
  5. Section 3.5 introduced the column space \(\col(A)\) and null space \(\nul(A)\) of a matrix \(A\text{.}\) What is the meaning of the null space \(\nul(A)\text{?}\)
  6. What is the meaning of the column space \(\col(A)\text{?}\)

Subsection 6.2.1 Orthogonal complements

The preview activity presented us with a vector \(\vvec\) and led us through the process of describing all the vectors orthogonal to \(\vvec\text{.}\) Notice that the set of scalar multiples of \(\vvec\) describes a line \(L\text{,}\) a 1-dimensional subspace of \(\real^2\text{.}\) We then described a second line consisting of all the vectors orthogonal to \(\vvec\text{.}\) Notice that every vector on this line is orthogonal to every vector on the line \(L\text{.}\) We call this new line the orthogonal complement of \(L\) and denote it by \(L^\perp\text{.}\) The lines \(L\) and \(L^\perp\) are illustrated on the left of Figure 6.2.2.
Figure 6.2.2. On the left is a line \(L\) and its orthogonal complement \(L^\perp\text{.}\) On the right is a plane \(W\) and its orthogonal complement \(W^\perp\) in \(\real^3\text{.}\)
The next definition places this example into a more general context.

Definition 6.2.3.

Given a subspace \(W\) of \(\real^m\text{,}\) the orthogonal complement of \(W\) is the set of vectors in \(\real^m\) each of which is orthogonal to every vector in \(W\text{.}\) We denote the orthogonal complement by \(W^\perp\text{.}\)
A typical example appears on the right of Figure 6.2.2. Here we see a plane \(W\text{,}\) a two-dimensional subspace of \(\real^3\text{,}\) and its orthogonal complement \(W^\perp\text{,}\) which is a line in \(\real^3\text{.}\)
As the next activity demonstrates, the orthogonal complement of a subspace \(W\) is itself a subspace of \(\real^m\text{.}\)

Activity 6.2.2.

Suppose that \(\wvec_1=\threevec10{-2}\) and \(\wvec_2=\threevec11{-1}\) form a basis for \(W\text{,}\) a two-dimensional subspace of \(\real^3\text{.}\) We will find a description of the orthogonal complement \(W^\perp\text{.}\)
  1. Suppose that the vector \(\xvec\) is orthogonal to \(\wvec_1\text{.}\) If we write \(\xvec=\threevec{x_1}{x_2}{x_3}\text{,}\) use the fact that \(\wvec_1\cdot\xvec = 0\) to write a linear equation for \(x_1\text{,}\) \(x_2\text{,}\) and \(x_3\text{.}\)
  2. Suppose that \(\xvec\) is also orthogonal to \(\wvec_2\text{.}\) In the same way, write a linear equation for \(x_1\text{,}\) \(x_2\text{,}\) and \(x_3\) that arises from the fact that \(\wvec_2\cdot\xvec = 0\text{.}\)
  3. If \(\xvec\) is orthogonal to both \(\wvec_1\) and \(\wvec_2\text{,}\) these two equations give us a linear system \(B\xvec=\zerovec\) for some matrix \(B\text{.}\) Identify the matrix \(B\) and write a parametric description of the solution space to the equation \(B\xvec = \zerovec\text{.}\)
  4. Since \(\wvec_1\) and \(\wvec_2\) form a basis for the two-dimensional subspace \(W\text{,}\) any vector \(\wvec\) in \(W\) can be written as a linear combination
    \begin{equation*} \wvec = c_1\wvec_1 + c_2\wvec_2\text{.} \end{equation*}
    If \(\xvec\) is orthogonal to both \(\wvec_1\) and \(\wvec_2\text{,}\) use the distributive property of dot products to explain why \(\xvec\) is orthogonal to \(\wvec\text{.}\)
  5. Give a basis for the orthogonal complement \(W^\perp\) and state the dimension \(\dim W^\perp\text{.}\)
  6. Describe \((W^\perp)^\perp\text{,}\) the orthogonal complement of \(W^\perp\text{.}\)
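One way to check your work on this activity is to ask Sage for the solutions to \(B\xvec=\zerovec\text{.}\) The sketch below assumes the matrix \(B\) has rows \(\wvec_1\) and \(\wvec_2\text{;}\) the command right_kernel, which also appears later in this section, returns the solution space of \(B\xvec=\zerovec\text{,}\) though Sage may display an equivalent basis rather than the one you found.

    # rows are w1 = (1,0,-2) and w2 = (1,1,-1)
    B = matrix(QQ, [[1, 0, -2], [1, 1, -1]])
    print(B.rref())           # reduced row echelon form of B
    print(B.right_kernel())   # solutions to B*x = 0, which form W-perp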

Example 6.2.4.

If \(L\) is the line defined by \(\vvec=\threevec1{-2}3\) in \(\real^3\text{,}\) we will describe the orthogonal complement \(L^\perp\text{,}\) the set of vectors orthogonal to \(L\text{.}\)
If \(\xvec\) is orthogonal to \(L\text{,}\) it must be orthogonal to \(\vvec\text{,}\) so we have
\begin{equation*} \vvec\cdot\xvec = x_1-2x_2+3x_3 = 0\text{.} \end{equation*}
We can describe the solutions to this equation parametrically as
\begin{equation*} \xvec=\threevec{x_1}{x_2}{x_3} = \cthreevec{2x_2-3x_3}{x_2}{x_3} = x_2\threevec210+x_3\threevec{-3}01\text{.} \end{equation*}
Therefore, the orthogonal complement \(L^\perp\) is a plane, a two-dimensional subspace of \(\real^3\text{,}\) spanned by the vectors \(\threevec210\) and \(\threevec{-3}01\text{.}\)
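As a quick check of this example in Sage (a sketch; the vector names here are chosen only for illustration), both dot products below should be zero.

    v = vector([1, -2, 3])
    u1 = vector([2, 1, 0])     # first spanning vector of L-perp
    u2 = vector([-3, 0, 1])    # second spanning vector of L-perp
    print(v.dot_product(u1), v.dot_product(u2))   # 0 0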

Example 6.2.5.

Suppose that \(W\) is the \(2\)-dimensional subspace of \(\real^5\) with basis
\begin{equation*} \wvec_1=\fivevec{-1}{-2}23{-4},\hspace{24pt} \wvec_2=\fivevec24202\text{.} \end{equation*}
We will give a description of the orthogonal complement \(W^\perp\text{.}\)
If \(\xvec\) is in \(W^\perp\text{,}\) we know that \(\xvec\) is orthogonal to both \(\wvec_1\) and \(\wvec_2\text{.}\) Therefore,
\begin{align*} \wvec_1\cdot\xvec \amp {}={}-x_1-2x_2+2x_3+3x_4-4x_5 \amp {}={} 0\\ \wvec_2\cdot\xvec \amp {}={} 2x_1+4x_2+2x_3+0x_4+2x_5 \amp {}={} 0 \end{align*}
In other words, \(B\xvec=\zerovec\) where
\begin{equation*} B = \begin{bmatrix} -1 \amp -2 \amp 2 \amp 3 \amp -4 \\ 2 \amp 4 \amp 2 \amp 0 \amp 2 \end{bmatrix} \sim \begin{bmatrix} 1 \amp 2 \amp 0 \amp -1 \amp 2 \\ 0 \amp 0 \amp 1 \amp 1 \amp -1 \end{bmatrix}\text{.} \end{equation*}
The solutions may be described parametrically as
\begin{equation*} \xvec=\fivevec{x_1}{x_2}{x_3}{x_4}{x_5} =x_2\fivevec{-2}1000 + x_4\fivevec10{-1}10 + x_5\fivevec{-2}0101\text{.} \end{equation*}
The distributive property of dot products implies that any vector that is orthogonal to both \(\wvec_1\) and \(\wvec_2\) is also orthogonal to any linear combination of \(\wvec_1\) and \(\wvec_2\) since
\begin{equation*} (c_1\wvec_1 + c_2\wvec_2)\cdot\xvec = c_1\wvec_1\cdot\xvec + c_2\wvec_2\cdot\xvec = 0\text{.} \end{equation*}
Therefore, \(W^\perp\) is a \(3\)-dimensional subspace of \(\real^5\) with basis
\begin{equation*} \vvec_1=\fivevec{-2}1000, \hspace{24pt} \vvec_2=\fivevec10{-1}10, \hspace{24pt} \vvec_3=\fivevec{-2}0101\text{.} \end{equation*}
One may check that the vectors \(\vvec_1\text{,}\) \(\vvec_2\text{,}\) and \(\vvec_3\) are each orthogonal to both \(\wvec_1\) and \(\wvec_2\text{.}\)
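This computation can also be checked in Sage. The sketch below reproduces the row reduction and asks for the solution space of \(B\xvec=\zerovec\text{;}\) note that Sage may display a different, but equivalent, basis for \(W^\perp\text{.}\)

    B = matrix(QQ, [[-1, -2, 2, 3, -4],
                    [ 2,  4, 2, 0,  2]])
    print(B.rref())           # matches the reduced form above
    print(B.right_kernel())   # a three-dimensional solution space, W-perp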

Subsection 6.2.2 The matrix transpose

The previous activity and examples show how we can describe the orthogonal complement of a subspace as the solution set of a particular linear system. We will make this connection more explicit by defining a new matrix operation called the transpose.

Definition 6.2.6.

The transpose of the \(m\times n\) matrix \(A\) is the \(n\times m\) matrix \(A^T\) whose rows are the columns of \(A\text{.}\)

Example 6.2.7.

If \(A=\begin{bmatrix} 4 \amp -3 \amp 0 \amp 5 \\ -1 \amp 2 \amp 1 \amp 3 \\ \end{bmatrix} \text{,}\) then \(A^T=\begin{bmatrix} 4 \amp -1 \\ -3 \amp 2 \\ 0 \amp 1 \\ 5 \amp 3 \\ \end{bmatrix}\text{.}\)
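In Sage, which we will use later in this section, this computation is a one-liner; the following is a minimal sketch.

    A = matrix([[4, -3, 0, 5], [-1, 2, 1, 3]])
    print(A.T)    # the 4 x 2 transpose shown above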

Activity 6.2.3.

This activity illustrates how multiplying a vector by \(A^T\) is related to computing dot products with the columns of \(A\text{.}\) You’ll develop a better understanding of this relationship if you compute the dot products and matrix products in this activity without using technology.
  1. If \(B = \begin{bmatrix} 3 \amp 4 \\ -1 \amp 2 \\ 0 \amp -2 \\ \end{bmatrix} \text{,}\) write the matrix \(B^T\text{.}\)
  2. Suppose that
    \begin{equation*} \vvec_1=\threevec20{-2},\hspace{24pt} \vvec_2=\threevec112,\hspace{24pt} \wvec=\threevec{-2}23\text{.} \end{equation*}
    Find the dot products \(\vvec_1\cdot\wvec\) and \(\vvec_2\cdot\wvec\text{.}\)
  3. Now write the matrix \(A = \begin{bmatrix} \vvec_1 \amp \vvec_2 \end{bmatrix}\) and its transpose \(A^T\text{.}\) Find the product \(A^T\wvec\) and describe how this product computes both dot products \(\vvec_1\cdot\wvec\) and \(\vvec_2\cdot\wvec\text{.}\)
  4. Suppose that \(\xvec\) is a vector that is orthogonal to both \(\vvec_1\) and \(\vvec_2\text{.}\) What does this say about the dot products \(\vvec_1\cdot\xvec\) and \(\vvec_2\cdot\xvec\text{?}\) What does this say about the product \(A^T\xvec\text{?}\)
  5. Use the matrix \(A^T\) to give a parametric description of all the vectors \(\xvec\) that are orthogonal to \(\vvec_1\) and \(\vvec_2\text{.}\)
  6. Remember that \(\nul(A^T)\text{,}\) the null space of \(A^T\text{,}\) is the solution set of the equation \(A^T\xvec=\zerovec\text{.}\) If \(\xvec\) is a vector in \(\nul(A^T)\text{,}\) explain why \(\xvec\) must be orthogonal to both \(\vvec_1\) and \(\vvec_2\text{.}\)
  7. Remember that \(\col(A)\text{,}\) the column space of \(A\text{,}\) is the set of linear combinations of the columns of \(A\text{.}\) Therefore, any vector in \(\col(A)\) can be written as \(c_1\vvec_1+c_2\vvec_2\text{.}\) If \(\xvec\) is a vector in \(\nul(A^T)\text{,}\) explain why \(\xvec\) is orthogonal to every vector in \(\col(A)\text{.}\)
The previous activity demonstrates an important connection between the matrix transpose and dot products. More specifically, the components of the product \(A^T\xvec\) are simply the dot products of the columns of \(A\) with \(\xvec\text{.}\) We will make frequent use of this observation, so let's record it as a proposition.

Proposition 6.2.8.

If \(A=\begin{bmatrix} \vvec_1 \amp \vvec_2 \amp \ldots \amp \vvec_n \end{bmatrix}\text{,}\) then
\begin{equation*} A^T\xvec = \begin{bmatrix} \vvec_1\cdot\xvec \\ \vvec_2\cdot\xvec \\ \vdots \\ \vvec_n\cdot\xvec \end{bmatrix}\text{.} \end{equation*}
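To illustrate the proposition, here is a short Sage sketch using the vectors from Activity 6.2.3; the product \(A^T\wvec\) reproduces the two dot products computed by hand.

    v1 = vector([2, 0, -2])
    v2 = vector([1, 1, 2])
    w = vector([-2, 2, 3])
    A = matrix([v1, v2]).T    # v1 and v2 become the columns of A
    print(A.T * w)            # (-10, 6)
    print(v1.dot_product(w), v2.dot_product(w))   # -10 6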

Example 6.2.9.

Suppose that \(W\) is a subspace of \(\real^4\) having basis
\begin{equation*} \wvec_1=\fourvec1021,\hspace{24pt} \wvec_2=\fourvec2134\text{,} \end{equation*}
and that we wish to describe the orthogonal complement \(W^\perp\text{.}\)
If \(A\) is the matrix \(A = \begin{bmatrix}\wvec_1 \amp \wvec_2\end{bmatrix}\) and \(\xvec\) is in \(W^\perp\text{,}\) we have
\begin{equation*} A^T\xvec = \twovec{\wvec_1\cdot\xvec}{\wvec_2\cdot\xvec} = \twovec00\text{.} \end{equation*}
Describing vectors \(\xvec\) that are orthogonal to both \(\wvec_1\) and \(\wvec_2\) is therefore equivalent to the more familiar task of describing the solution set of \(A^T\xvec = \zerovec\text{.}\) To do so, we find the reduced row echelon form of \(A^T\) and write the solution set parametrically as
\begin{equation*} \xvec = x_3\fourvec{-2}{1}10 + x_4\fourvec{-1}{-2}01\text{.} \end{equation*}
Once again, the distributive property of dot products tells us that such a vector is also orthogonal to any linear combination of \(\wvec_1\) and \(\wvec_2\text{,}\) so this solution set is, in fact, the orthogonal complement \(W^\perp\text{.}\) Indeed, we see that the vectors
\begin{equation*} \vvec_1=\fourvec{-2}110,\hspace{24pt} \vvec_2=\fourvec{-1}{-2}01 \end{equation*}
form a basis for \(W^\perp\text{,}\) which is a two-dimensional subspace of \(\real^4\text{.}\)
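Here is how this example might be carried out in Sage (a sketch; Sage may present an equivalent basis for \(W^\perp\)).

    A = matrix(QQ, [[1, 2], [0, 1], [2, 3], [1, 4]])   # columns are w1 and w2
    print(A.T.rref())           # reduced row echelon form of A^T
    print(A.T.right_kernel())   # solutions to A^T x = 0, which form W-perp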
To place this example in a slightly more general context, note that \(\wvec_1\) and \(\wvec_2\text{,}\) the columns of \(A\text{,}\) form a basis of \(W\text{.}\) Since \(\col(A)\text{,}\) the column space of \(A\text{,}\) is the subspace of linear combinations of the columns of \(A\text{,}\) we have \(W=\col(A)\text{.}\)
This example also shows that the orthogonal complement \(W^\perp = \col(A)^\perp\) is described by the solution set of \(A^T\xvec = \zerovec\text{.}\) This solution set is what we have called \(\nul(A^T)\text{,}\) the null space of \(A^T\text{.}\) In this way, we see the following proposition, which is visually represented in Figure 6.2.11.

Proposition 6.2.10.

For any matrix \(A\text{,}\) the orthogonal complement of its column space is the null space of its transpose; that is,
\begin{equation*} \col(A)^\perp = \nul(A^T)\text{.} \end{equation*}
Figure 6.2.11. The orthogonal complement of the column space of \(A\) is the null space of \(A^T\text{.}\)

Subsection 6.2.3 Properties of the matrix transpose

The transpose is a simple algebraic operation performed on a matrix. The next activity explores some of its properties.

Activity 6.2.4.

In Sage, the transpose of a matrix A is given by A.T. Define the matrices
\begin{equation*} A = \begin{bmatrix} 1 \amp 0 \amp -3 \\ 2 \amp -2 \amp 1 \\ \end{bmatrix}, \hspace{6pt} B = \begin{bmatrix} 3 \amp -4 \amp 1 \\ 0 \amp 1 \amp 2 \\ \end{bmatrix}, \hspace{6pt} C= \begin{bmatrix} 1 \amp 0 \amp -3 \\ 2 \amp -2 \amp 1 \\ 3 \amp 2 \amp 0 \\ \end{bmatrix}\text{.} \end{equation*}
  1. Evaluate \((A+B)^T\) and \(A^T+B^T\text{.}\) What do you notice about the relationship between these two matrices?
  2. What happens if you transpose a matrix twice; that is, what is \((A^T)^T\text{?}\)
  3. Find \(\det(C)\) and \(\det(C^T)\text{.}\) What do you notice about the relationship between these determinants?
  4. Next, explore how the transpose interacts with matrix products.
    1. Find the product \(AC\) and its transpose \((AC)^T\text{.}\)
    2. Is it possible to compute the product \(A^TC^T\text{?}\) Explain why or why not.
    3. Find the product \(C^TA^T\) and compare it to \((AC)^T\text{.}\) What do you notice about the relationship between these two matrices?
  5. What is the transpose of the identity matrix \(I\text{?}\)
  6. If a square matrix \(D\) is invertible, explain why you can guarantee that \(D^T\) is invertible and why \((D^T)^{-1} = (D^{-1})^T\text{.}\)
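For reference, here is one way to define these matrices in Sage and test the properties explored above; each comparison should return True.

    A = matrix([[1, 0, -3], [2, -2, 1]])
    B = matrix([[3, -4, 1], [0, 1, 2]])
    C = matrix([[1, 0, -3], [2, -2, 1], [3, 2, 0]])
    print((A + B).T == A.T + B.T)   # transpose of a sum
    print((A.T).T == A)             # transposing twice returns A
    print(C.det() == C.T.det())     # determinants agree
    print((A*C).T == C.T * A.T)     # transpose of a product reverses the order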
Although this activity examines only a few specific matrices, it demonstrates the following general properties of the transpose, which may be verified with a little effort.

Properties of the transpose.

Here are some properties of the matrix transpose, expressed in terms of general matrices \(A\text{,}\) \(B\text{,}\) and \(C\text{.}\) We assume that \(C\) is a square matrix.
  • If \(A+B\) is defined, then \((A+B)^T = A^T+B^T\text{.}\)
  • \((sA)^T = sA^T\) for any scalar \(s\text{.}\)
  • \((A^T)^T = A\text{.}\)
  • \(\det(C) = \det(C^T)\text{.}\)
  • If \(AB\) is defined, then \((AB)^T = B^TA^T\text{.}\) Notice that the order of the multiplication is reversed.
  • If \(C\) is invertible, then \((C^T)^{-1} = (C^{-1})^T\text{.}\)
There is one final property we wish to record, though we will wait until Section 7.4 to explain why it is true.

Proposition 6.2.12.

For any matrix \(A\text{,}\) we have
\begin{equation*} \rank(A) = \rank(A^T)\text{.} \end{equation*}
This proposition is important because it implies a relationship between the dimensions of a subspace and its orthogonal complement. For instance, if \(A\) is an \(m\times n\) matrix, we saw in Section 3.5 that \(\dim\col(A) = \rank(A)\) and \(\dim\nul(A) = n-\rank(A)\text{.}\)
Now suppose that \(W\) is an \(n\)-dimensional subspace of \(\real^m\) with basis \(\wvec_1,\wvec_2,\ldots,\wvec_n\text{.}\) If we form the \(m\times n\) matrix \(A=\begin{bmatrix}\wvec_1 \amp \wvec_2 \amp \ldots \amp \wvec_n\end{bmatrix}\text{,}\) then \(\col(A) = W\) so that
\begin{equation*} \rank(A) = \dim\col(A) = \dim W = n\text{.} \end{equation*}
The transpose \(A^T\) is an \(n\times m\) matrix having \(\rank(A^T) = \rank(A)= n\text{.}\) Since \(W^\perp = \nul(A^T)\text{,}\) we have
\begin{equation*} \dim W^\perp = \dim\nul(A^T) = m - \rank(A^T) = m-n = m-\dim W\text{.} \end{equation*}
This explains the following proposition.

Proposition 6.2.13.

If \(W\) is a subspace of \(\real^m\text{,}\) then
\begin{equation*} \dim W + \dim W^\perp = m\text{.} \end{equation*}
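Both propositions can be illustrated numerically; this sketch reuses the matrix from Example 6.2.9, whose column space is the two-dimensional subspace \(W\) of \(\real^4\text{.}\)

    A = matrix(QQ, [[1, 2], [0, 1], [2, 3], [1, 4]])
    print(A.rank() == A.T.rank())   # True: rank(A) = rank(A^T)
    # dim W + dim W-perp = 2 + 2 = 4, the dimension of R^4
    print(A.rank() + A.T.right_kernel().dimension())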

Example 6.2.14.

In Example 6.2.4, we constructed the orthogonal complement of a line in \(\real^3\text{.}\) The dimension of the orthogonal complement should be \(3 - 1 = 2\text{,}\) which explains why we found the orthogonal complement to be a plane.

Example 6.2.15.

In Example 6.2.5, we looked at \(W\text{,}\) a \(2\)-dimensional subspace of \(\real^5\text{,}\) and found its orthogonal complement \(W^\perp\) to be a \(5-2=3\)-dimensional subspace of \(\real^5\text{.}\)

Activity 6.2.5.

  1. Suppose that \(W\) is a \(5\)-dimensional subspace of \(\real^9\) and that \(A\) is a matrix whose columns form a basis for \(W\text{;}\) that is, \(\col(A) = W\text{.}\)
    1. What is the shape of \(A\text{?}\)
    2. What is the rank of \(A\text{?}\)
    3. What is the shape of \(A^T\text{?}\)
    4. What is the rank of \(A^T\text{?}\)
    5. What is \(\dim\nul(A^T)\text{?}\)
    6. What is \(\dim W^\perp\text{?}\)
    7. How are the dimensions of \(W\) and \(W^\perp\) related?
  2. Suppose that \(W\) is a subspace of \(\real^4\) having basis
    \begin{equation*} \wvec_1 = \fourvec102{-1},\hspace{24pt} \wvec_2 = \fourvec{-1}2{-6}3. \end{equation*}
    1. Find the dimensions \(\dim W\) and \(\dim W^\perp\text{.}\)
    2. Find a basis for \(W^\perp\text{.}\) It may be helpful to know that the Sage command A.right_kernel() produces a basis for \(\nul(A)\text{.}\)
    3. Verify that each of the basis vectors you found for \(W^\perp\) is orthogonal to the basis vectors for \(W\text{.}\)
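If you would like to check your work on the second part of this activity, the following Sage sketch finds a basis for \(W^\perp\) and verifies the orthogonality; the names here are chosen only for illustration, and Sage may present an equivalent basis.

    A = matrix(QQ, [[1, -1], [0, 2], [2, -6], [-1, 3]])   # columns are w1 and w2
    K = A.T.right_kernel()    # the orthogonal complement W-perp
    print(K.basis())
    print([v * w for v in K.basis() for w in A.columns()])   # all dot products are 0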

Subsection 6.2.4 Summary

This section introduced the matrix transpose, its connection to dot products, and its use in describing the orthogonal complement of a subspace.
  • The columns of the matrix \(A\) are the rows of the matrix transpose \(A^T\text{.}\)
  • The components of the product \(A^T\xvec\) are the dot products of \(\xvec\) with the columns of \(A\text{.}\)
  • The orthogonal complement of the column space of \(A\) equals the null space of \(A^T\text{;}\) that is, \(\col(A)^\perp = \nul(A^T)\text{.}\)
  • If \(W\) is a subspace of \(\real^p\text{,}\) then
    \begin{equation*} \dim W + \dim W^\perp = p. \end{equation*}

Exercises 6.2.5 Exercises

1.

Suppose that \(W\) is a subspace of \(\real^4\) with basis
\begin{equation*} \wvec_1=\fourvec{-2}22{-4},\hspace{24pt} \wvec_2=\fourvec{-2}35{-5}. \end{equation*}
  1. What are the dimensions \(\dim W\) and \(\dim W^\perp\text{?}\)
  2. Find a basis for \(W^\perp\text{.}\)
  3. Verify that each of the basis vectors for \(W^\perp\) is orthogonal to \(\wvec_1\) and \(\wvec_2\text{.}\)

2.

Consider the matrix \(A = \begin{bmatrix} -1 \amp -2 \amp -2 \\ 1 \amp 3 \amp 4 \\ 2 \amp 1 \amp -2 \end{bmatrix}\text{.}\)
  1. Find \(\rank(A)\) and a basis for \(\col(A)\text{.}\)
  2. Determine the dimension of \(\col(A)^\perp\) and find a basis for it.

3.

Suppose that \(W\) is the subspace of \(\real^4\) defined as the solution set of the equation
\begin{equation*} x_1 - 3x_2 + 5x_3 - 2x_4 = 0. \end{equation*}
  1. What are the dimensions \(\dim W\) and \(\dim W^\perp\text{?}\)
  2. Find a basis for \(W\text{.}\)
  3. Find a basis for \(W^\perp\text{.}\)
  4. In general, how can you easily find a basis for \(W^\perp\) when \(W\) is defined by
    \begin{equation*} Ax_1+Bx_2+Cx_3+Dx_4 = 0? \end{equation*}

4.

Determine whether the following statements are true or false and explain your reasoning.
  1. If \(A=\begin{bmatrix} 2 \amp 1 \\ 1 \amp 1 \\ -3 \amp 1 \end{bmatrix}\text{,}\) then \(\xvec=\threevec{4}{-5}1\) is in \(\col(A)^\perp\text{.}\)
  2. If \(A\) is a \(2\times3\) matrix and \(B\) is a \(3\times4\) matrix, then \((AB)^T = A^TB^T\) is a \(4\times2\) matrix.
  3. If the columns of \(A\) are \(\vvec_1\text{,}\) \(\vvec_2\text{,}\) and \(\vvec_3\) and \(A^T\xvec = \threevec{2}01\text{,}\) then \(\xvec\) is orthogonal to \(\vvec_2\text{.}\)
  4. If \(A\) is a \(4\times 4\) matrix with \(\rank(A) = 3\text{,}\) then \(\col(A)^\perp\) is a line in \(\real^4\text{.}\)
  5. If \(A\) is a \(5\times 7\) matrix with \(\rank(A) = 5\text{,}\) then \(\rank(A^T) = 7\text{.}\)

5.

Apply properties of matrix operations to simplify the following expressions.
  1. \(\displaystyle A^T(BA^T)^{-1} \)
  2. \(\displaystyle (A+B)^T(A+B) \)
  3. \(\displaystyle [A(A+B)^T]^T \)
  4. \(\displaystyle (A + 2I)^T \)

6.

A symmetric matrix \(A\) is one for which \(A=A^T\text{.}\)
  1. Explain why a symmetric matrix must be square.
  2. If \(A\) and \(B\) are general matrices and \(D\) is a square diagonal matrix, which of the following matrices can you guarantee are symmetric?
    1. \(\displaystyle D\)
    2. \(\displaystyle BAB^{-1} \)
    3. \(\displaystyle AA^T\)
    4. \(\displaystyle BDB^T\)

7.

If \(A\) is a square matrix, remember that the characteristic polynomial of \(A\) is \(\det(A-\lambda I)\) and that the roots of the characteristic polynomial are the eigenvalues of \(A\text{.}\)
  1. Explain why \(A\) and \(A^T\) have the same characteristic polynomial.
  2. Explain why \(A\) and \(A^T\) have the same set of eigenvalues.
  3. Suppose that \(A\) is diagonalizable with diagonalization \(A=PDP^{-1}\text{.}\) Explain why \(A^T\) is diagonalizable and find a diagonalization.

8.

This exercise introduces a version of the Pythagorean theorem that we’ll use later.
  1. Suppose that \(\vvec\) and \(\wvec\) are orthogonal to one another. Use the dot product to explain why
    \begin{equation*} \len{\vvec+\wvec}^2 = \len{\vvec}^2 + \len{\wvec}^2. \end{equation*}
  2. Suppose that \(W\) is a subspace of \(\real^m\) and that \(\zvec\) is a vector in \(\real^m\) for which
    \begin{equation*} \zvec = \xvec + \yvec, \end{equation*}
    where \(\xvec\) is in \(W\) and \(\yvec\) is in \(W^\perp\text{.}\) Explain why
    \begin{equation*} \len{\zvec}^2 = \len{\xvec}^2 + \len{\yvec}^2, \end{equation*}
    which is an expression of the Pythagorean theorem.

9.

In the next chapter, symmetric matrices---that is, matrices for which \(A=A^T\)---play an important role. It turns out that eigenvectors of a symmetric matrix that are associated to different eigenvalues are orthogonal. We will explain this fact in this exercise.
  1. Viewing a vector as a matrix having one column, we may write \(\xvec\cdot\yvec = \xvec^T\yvec\text{.}\) If \(A\) is a matrix, explain why \(\xvec\cdot (A\yvec) = (A^T\xvec) \cdot \yvec\text{.}\)
  2. We have seen that the matrix \(A=\begin{bmatrix} 1 \amp 2 \\ 2 \amp 1 \end{bmatrix}\) has eigenvectors \(\vvec_1=\twovec11\text{,}\) with associated eigenvalue \(\lambda_1=3\text{,}\) and \(\vvec_2 = \twovec{1}{-1}\text{,}\) with associated eigenvalue \(\lambda_2 = -1\text{.}\) Verify that \(A\) is symmetric and that \(\vvec_1\) and \(\vvec_2\) are orthogonal.
  3. Suppose that \(A\) is a general symmetric matrix and that \(\vvec_1\) is an eigenvector associated to eigenvalue \(\lambda_1\) and that \(\vvec_2\) is an eigenvector associated to a different eigenvalue \(\lambda_2\text{.}\) Beginning with \(\vvec_1\cdot (A\vvec_2)\text{,}\) apply the identity from the first part of this exercise to explain why \(\vvec_1\) and \(\vvec_2\) are orthogonal.

10.

Given an \(m\times n\) matrix \(A\text{,}\) the row space of \(A\) is the column space of \(A^T\text{;}\) that is, \(\row(A) = \col(A^T)\text{.}\)
  1. Suppose that \(A\) is a \(7\times 15\) matrix. For what \(p\) is \(\row(A)\) a subspace of \(\real^p\text{?}\)
  2. How can Proposition 6.2.10 help us describe \(\row(A)^\perp\text{?}\)
  3. Suppose that \(A = \begin{bmatrix} -1 \amp -2 \amp 2 \amp 1 \\ 2 \amp 4 \amp -1 \amp 5 \\ 1 \amp 2 \amp 0 \amp 3 \end{bmatrix}\text{.}\) Find bases for \(\row(A)\) and \(\row(A)^\perp\text{.}\)