I want to talk about some intuition behind the Rank-Nullity Theorem. In particular, let’s talk about why, even though the null space and column space of an $m \times n$ matrix live in different ambient dimensions ($\mathbb{R}^n$ and $\mathbb{R}^m$, respectively), it still makes sense that there is a connection between them. As a reminder, here’s the Rank-Nullity Theorem.
Theorem: If an $m \times n$ matrix has a column space of dimension $r$, then it has a null space of dimension $n - r$.
The rank of a matrix tells you how many columns in the matrix are actually contributing to the dimension of the column space. For example, the matrix

$$A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{pmatrix}$$

has a $1$-dimensional column space. Since $A$ has two rows, the column space could be at most $2$-dimensional. Since the second two columns are just multiples of the first, though, neither of them provides any new information. In other words, those columns are redundant.
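If you'd like to check this numerically, here is a minimal sketch (my own illustration, not part of the original argument) using NumPy and SciPy's `null_space` on the example matrix $A$ above: the rank and the nullity add up to the number of columns, exactly as the theorem promises.

```python
# Quick numerical check of the Rank-Nullity Theorem for the example matrix above.
# (Illustrative sketch only; assumes the 2x3 example matrix A used in this post.)
import numpy as np
from scipy.linalg import null_space

A = np.array([[1, 2, 3],
              [2, 4, 6]])

n = A.shape[1]                      # number of columns (here, 3)
rank = np.linalg.matrix_rank(A)     # dimension of the column space
nullity = null_space(A).shape[1]    # dimension of the null space

print(rank, nullity)                # 1 2
assert rank + nullity == n          # Rank-Nullity: r + (n - r) = n
```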
Now let’s examine the redundancy of the columns of $A$ by looking at its null space. First, let’s note that we can write

$$\begin{pmatrix} 2 \\ 4 \end{pmatrix} = 2\begin{pmatrix} 1 \\ 2 \end{pmatrix}.$$

This also means that we can write

$$A\begin{pmatrix} 2 \\ -1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$

since multiplying a matrix by a vector is just taking a weighted sum of the columns of the matrix. To get from the first equation to the second, you just move one vector to the other side of the equals sign.
This means that finding a nonzero vector in the null space of a matrix is equivalent to writing one of its columns as a linear combination of the others. In other words, not all of the columns are needed to tell you everything you want to know about the matrix.
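Here is a small sketch of that equivalence in code (again my own illustration, reusing the example $A$ and the vector $(2, -1, 0)$ derived above): a nonzero null space vector is exactly a recipe for writing one column in terms of the others.

```python
# Sketch: a nonzero null space vector of A encodes a dependency among A's columns.
# Uses the same example matrix A and the vector x = (2, -1, 0) from the text.
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6]])
x = np.array([2, -1, 0])            # 2*(column 1) - 1*(column 2) + 0*(column 3)

assert np.allclose(A @ x, 0)        # x is in the null space of A

# Rearranging 2*col1 - col2 = 0 expresses column 2 in terms of column 1:
col1, col2 = A[:, 0], A[:, 1]
assert np.allclose(col2, 2 * col1)  # column 2 is redundant
```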
I hope this helped you see how redundancy in a matrix shows up both as a column space that doesn’t span all of $\mathbb{R}^m$ AND as a non-trivial null space!