We will study the subspaces spanned by the rows of a matrix and the columns of a matrix, respectively. Next we will take a look at the relation with systems of equations.
The column space was dealt with before, but will be introduced once more in conjunction with the row space:
Consider the real $m\times n$-matrix $A$.
- Each row of $A$ has length $n$, so the rows belong to $\mathbb{R}^n$. The subspace of $\mathbb{R}^n$ spanned by the rows is called the row space of $A$.
- Similarly, each column of $A$ belongs to $\mathbb{R}^m$. The subspace of $\mathbb{R}^m$ spanned by the columns is called the column space of $A$.
Sequences of numbers of length $n$, even when written as a column, can be considered as elements of $\mathbb{R}^n$, and rows can be considered as columns, if convenient. Of course, we will only do this when it does not result in any confusion. For example, we write: the system $A\vec{x}=\vec{b}$ with $\vec{x}$ in $\mathbb{R}^n$, while $\vec{b}$ is a column vector.
The matrix
has row space
and column space
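To illustrate both notions with a small matrix chosen here, the $2\times 3$-matrix
$$
M=\begin{pmatrix}1&0&2\\0&1&1\end{pmatrix}
$$
has row space spanned by $\begin{pmatrix}1&0&2\end{pmatrix}$ and $\begin{pmatrix}0&1&1\end{pmatrix}$, a subspace of $\mathbb{R}^3$, and column space spanned by $\begin{pmatrix}1\\0\end{pmatrix}$, $\begin{pmatrix}0\\1\end{pmatrix}$, and $\begin{pmatrix}2\\1\end{pmatrix}$, which is all of $\mathbb{R}^2$.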
Previously we saw that the column space of $A$ is equal to the image of the linear map determined by $A$. Since the columns of $A^\top$ are the rows of $A$ written as columns, they span the row space of $A$, so the row space of $A$ is equal to the image of the linear map determined by $A^\top$.
In general, there is no direct relation between the row space and the column space of a matrix. They can even be subspaces of different vector spaces. Still, their dimensions are equal, as we will see below.
In theory Basis and echelon form we have seen that the dimension of the row space is the same as the rank of the matrix. The notion of rank was introduced before as the number of rows distinct from the zero row in a row echelon form of the matrix. We will write $\operatorname{rank}(A)$ for the rank of $A$.
We will now show that the rank is equal to the dimension of the column space.
For every matrix $A$ the dimension of the row space is equal to the dimension of the column space. This number is equal to the rank of $A$.
The matrix product $AB$ of two general matrices $A$ and $B$, whose dimensions are such that the product is well defined, is equal to a matrix whose columns are linear combinations of the columns of $A$ and whose rows are linear combinations of the rows of $B$. The proof of this statement is immediate from the definition of the matrix product: each column of $AB$ is $A$ times the corresponding column of $B$, that is, a linear combination of the columns of $A$ with the entries of that column of $B$ as coefficients, and each row of $AB$ is the corresponding row of $A$ times $B$, a linear combination of the rows of $B$. This is illustrated in the example below.
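For instance, with the $2\times 2$-matrices chosen here for illustration,
$$
A=\begin{pmatrix}1&2\\3&4\end{pmatrix},\qquad B=\begin{pmatrix}5&6\\7&8\end{pmatrix},\qquad AB=\begin{pmatrix}19&22\\43&50\end{pmatrix},
$$
the first column of $AB$ is a linear combination of the columns of $A$,
$$
\begin{pmatrix}19\\43\end{pmatrix}=5\begin{pmatrix}1\\3\end{pmatrix}+7\begin{pmatrix}2\\4\end{pmatrix},
$$
and the first row of $AB$ is a linear combination of the rows of $B$,
$$
\begin{pmatrix}19&22\end{pmatrix}=1\cdot\begin{pmatrix}5&6\end{pmatrix}+2\cdot\begin{pmatrix}7&8\end{pmatrix}.
$$
The second column and the second row can be checked in the same way.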
Now assume that the columns of a given $m\times n$-matrix $A$ span a subspace of $\mathbb{R}^m$ of dimension $k$. Let $\vec{b}_1,\ldots,\vec{b}_k$ be a basis of this space. Collect these vectors as columns in an $m\times k$-matrix $B$. Each of the columns of $A$ is a linear combination of these columns; these linear combinations can be summarized in one matrix product
$$
A=B\,C,
$$
in which $C$ is a $k\times n$-matrix. Now we will concentrate on the rows in this equality. The matrix equation $A=B\,C$ can also be read as: every row of $A$ is a linear combination of the rows of $C$. Since the number of rows of $C$ is equal to $k$, the dimension of the row space of $A$ cannot be greater than $k$. Hence,
$$
\dim(\text{row space of }A)\le\dim(\text{column space of }A).
$$
By applying this inequality to $A^\top$ and noting that the dimension of the row space (respectively, column space) of $A^\top$ is equal to the dimension of the column space (respectively, row space) of $A$ we also find
$$
\dim(\text{column space of }A)\le\dim(\text{row space of }A).
$$
The two inequalities imply that the two dimensions are equal:
$$
\dim(\text{row space of }A)=\dim(\text{column space of }A),
$$
which is what we wanted to prove.
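As a concrete instance of this argument, with a matrix chosen here for illustration, the $3\times 3$-matrix
$$
A=\begin{pmatrix}1&2&0\\0&1&1\\1&3&1\end{pmatrix}
$$
has a column space of dimension $k=2$, spanned by $\vec{b}_1=\begin{pmatrix}1\\0\\1\end{pmatrix}$ and $\vec{b}_2=\begin{pmatrix}0\\1\\1\end{pmatrix}$, and indeed
$$
A=\begin{pmatrix}1&0\\0&1\\1&1\end{pmatrix}\begin{pmatrix}1&2&0\\0&1&1\end{pmatrix},
$$
so every row of $A$ is a linear combination of the two rows $\begin{pmatrix}1&2&0\end{pmatrix}$ and $\begin{pmatrix}0&1&1\end{pmatrix}$, and the row space of $A$ has dimension at most $2$.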
Let $A$ be an $m\times n$-matrix. In this proof we use the equality
$$
(A\vec{x})\cdot\vec{y}=\vec{x}\cdot\left(A^\top\vec{y}\right),
$$
where $\vec{x}$ belongs to $\mathbb{R}^n$ and $\vec{y}$ belongs to $\mathbb{R}^m$. We will regard both vectors as column vectors. The standard dot product on the left-hand side is defined for vectors of $\mathbb{R}^m$ and the standard dot product on the right-hand side for vectors of $\mathbb{R}^n$. The equality follows from the fact that, for $1\times 1$-matrices $M$, we have $M=M^\top$, and the product rule for transposed matrices, $(MN)^\top=N^\top M^\top$, so both sides can be written as the matrix product $\vec{x}^{\,\top}A^\top\vec{y}$, in which $\vec{x}^{\,\top}$ is the row vector associated with $\vec{x}$, viewed as a $1\times n$-matrix.
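For instance, with $A=\begin{pmatrix}1&2\\3&4\end{pmatrix}$, $\vec{x}=\begin{pmatrix}1\\1\end{pmatrix}$, and $\vec{y}=\begin{pmatrix}1\\-1\end{pmatrix}$ chosen here, the left-hand side is $(A\vec{x})\cdot\vec{y}=\begin{pmatrix}3\\7\end{pmatrix}\cdot\begin{pmatrix}1\\-1\end{pmatrix}=-4$ and the right-hand side is $\vec{x}\cdot\left(A^\top\vec{y}\right)=\begin{pmatrix}1\\1\end{pmatrix}\cdot\begin{pmatrix}-2\\-2\end{pmatrix}=-4$.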
In terms of the linear maps $L_A$ and $L_{A^\top}$ determined by $A$ and $A^\top$, the equality can be written as
$$
L_A(\vec{x})\cdot\vec{y}=\vec{x}\cdot L_{A^\top}(\vec{y}).
$$
From this we deduce:
$$
\ker\left(L_{A^\top}\right)=\left(\operatorname{im}\left(L_A\right)\right)^{\perp}.
$$
This can be seen in the following way: a vector $\vec{y}$ of $\mathbb{R}^m$ lies in $\ker\left(L_{A^\top}\right)$ if and only if $A^\top\vec{y}=\vec{0}$, which is the case if and only if $\vec{x}\cdot\left(A^\top\vec{y}\right)=0$ for all $\vec{x}$ in $\mathbb{R}^n$ (for the reverse implication, take $\vec{x}=A^\top\vec{y}$). By the equality above, this holds if and only if $(A\vec{x})\cdot\vec{y}=0$ for all $\vec{x}$ in $\mathbb{R}^n$, that is, if and only if $\vec{y}$ is perpendicular to the image of $L_A$.
From this equality of subspaces follows an equality of the corresponding dimensions: $\dim\left(\ker\left(L_{A^\top}\right)\right)=\dim\left(\left(\operatorname{im}\left(L_A\right)\right)^{\perp}\right)=m-\dim\left(\operatorname{im}\left(L_A\right)\right)$, since $\dim(W)+\dim\left(W^{\perp}\right)=m$ for every subspace $W$ of $\mathbb{R}^m$. From this we deduce the following statement:
$$
\dim\left(\operatorname{im}\left(L_A\right)\right)=m-\dim\left(\ker\left(L_{A^\top}\right)\right)=\dim\left(\operatorname{im}\left(L_{A^\top}\right)\right).
$$
The right-hand side of the second equality is equal to the right-hand side of the first equality on account of the Rank-nullity theorem for linear maps, applied to $L_{A^\top}\colon\mathbb{R}^m\to\mathbb{R}^n$.
Because $\operatorname{im}\left(L_A\right)$ is the column space of $A$ and $\operatorname{im}\left(L_{A^\top}\right)$ the row space, the statement follows from the last equality.
From the proofs we can deduce that the rank of an $m\times n$-matrix is not greater than the minimum of $m$ and $n$. The rank can only be equal to $0$ if the matrix is a null matrix.
A nonzero $m\times n$-matrix $A$ has rank $1$ if and only if there is a column vector $\vec{a}$ of length $m$ (that is, an $m\times 1$-matrix) and a row vector $\vec{b}$ of length $n$ (that is, a $1\times n$-matrix) such that $A=\vec{a}\,\vec{b}$. The proof can be found in the first proof of the current theorem.
For example,
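with entries chosen here for illustration,
$$
\begin{pmatrix}2&4&6\\1&2&3\end{pmatrix}=\begin{pmatrix}2\\1\end{pmatrix}\begin{pmatrix}1&2&3\end{pmatrix},
$$
where the $2\times 3$-matrix on the left is nonzero and has rank $1$, the column vector has length $2$, and the row vector has length $3$.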
More generally: the first proof of the theorem shows that $A$ has rank at most $k$ if and only if there are an $m\times k$-matrix $B$ and a $k\times n$-matrix $C$ such that $A=B\,C$.
In Solvability of systems of linear equations we saw that elementary operations do not change the rank of a matrix; therefore, they do not change the (in)dependence of the rows of that matrix either. Now that we know that the dimension of the row space is equal to the dimension of the column space, we see that elementary operations also do not change the (in)dependence of the columns of a matrix.
Previously we concluded that the system of equations $A\vec{x}=\vec{b}$ has a solution if and only if $\vec{b}$ belongs to the column space of $A$, and that there is only one solution if the columns are independent. Thanks to the statement above we can conclude the following.
Let $A$ be an $m\times n$-matrix. The system of equations $A\vec{x}=\vec{b}$ has no more than one solution for each vector $\vec{b}$ in $\mathbb{R}^m$ if and only if the rank of $A$ is equal to $n$. In that case, we have $n\le m$.
Assume that the rank of $A$ is equal to $n$; then the $n$ columns of $A$ are independent, so they form a basis of the column space of $A$. If $\vec{b}$ does not lie in the column space of $A$, there are no solutions. If $\vec{b}$ does lie in the column space, then each solution consists of the coordinates of $\vec{b}$ with respect to the basis of the column space consisting of the columns of $A$. These coordinates are unique since the columns form a basis.
In case the rank of $A$ is equal to $n$, the inequality $n\le m$ follows: if we had $m<n$, the matrix would have rank less than or equal to $m$, hence less than $n$, a contradiction.
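As an illustration with a matrix chosen here, take
$$
A=\begin{pmatrix}1&0\\0&1\\1&1\end{pmatrix},
$$
a $3\times 2$-matrix of rank $2=n$. For $\vec{b}=\begin{pmatrix}1\\2\\3\end{pmatrix}$ the system $A\vec{x}=\vec{b}$ has the unique solution $\vec{x}=\begin{pmatrix}1\\2\end{pmatrix}$, while for $\vec{b}=\begin{pmatrix}1\\2\\4\end{pmatrix}$, which lies outside the column space of $A$, there is no solution. In no case is there more than one solution.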
Determining the rank of a matrix is straightforward: we row reduce the matrix to an echelon form and count the number of rows distinct from the zero row. The statement above shows that we can also determine the rank by column reduction.
Determine the rank of the matrix
With the aid of elementary row operations we reduce the matrix to the reduced echelon form. Because the rank is the number of rows distinct from the zero row of this matrix, the rank of the original matrix can be read off directly from the result.
In the given solution, we have reduced the matrix to the reduced echelon form, although it is sufficient to reduce to any echelon form.
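As a quick cross-check of this procedure, here is a small sketch with an illustrative matrix chosen here (not the matrix from the exercise above), using the SymPy library to perform the row reduction and count the pivots:

```python
from sympy import Matrix

# Illustrative 3x3 matrix (chosen here for demonstration purposes).
A = Matrix([
    [1, 2, 0],
    [0, 1, 1],
    [1, 3, 1],
])

# Reduced row echelon form; rref() also returns the pivot columns,
# whose number equals the number of rows distinct from the zero row.
R, pivots = A.rref()
print(R)            # Matrix([[1, 0, -2], [0, 1, 1], [0, 0, 0]])
print(len(pivots))  # 2, the rank

# Column reduction amounts to row reduction of the transpose;
# both give the same value, illustrating that row rank equals column rank.
print(A.rank(), A.T.rank())  # 2 2
```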