# Rank-Nullity Theorem and Redundancy in a Matrix

I want to talk about some intuition behind the Rank-Nullity Theorem. In particular, let’s talk about why, even though the null space and column space of an $m\times n$ matrix live in different ambient dimensions ($\mathbb{R}^n$ and $\mathbb{R}^m$, respectively), it still makes sense that there is a connection between them. As a reminder, here’s the Rank-Nullity Theorem.

Theorem: If an $m\times n$ matrix has a column space of dimension $k$, then it has a null space of dimension $n-k$.

The rank of a matrix tells you how many columns of the matrix are actually contributing to the dimension of the column space. For example, the matrix $A=\begin{pmatrix}1&2&3\\2&4&6\end{pmatrix}$ has a $1$-dimensional column space. Since $A$ has two rows, the column space could have dimension at most $2$. Since the last two columns are just multiples of the first, though, neither of them provides any new information. In other words, those columns are redundant.
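As a quick sanity check, we can compute the rank of this example matrix numerically (a sketch using NumPy, assuming it is available):

```python
import numpy as np

# The example matrix from above; both later columns are multiples of the first.
A = np.array([[1, 2, 3],
              [2, 4, 6]])

# matrix_rank returns the dimension of the column space.
print(np.linalg.matrix_rank(A))  # 1
```

Even though $A$ has three columns, only one of them carries independent information, so the rank is $1$.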

Now let’s examine the redundancy of the columns of $A$ by looking at its null space. First, note that we can write $\begin{pmatrix}1&2&3\\2&4&6\end{pmatrix} \begin{pmatrix}1\\1\\-1\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}$. This also means that we can write $\begin{pmatrix}3\\6\end{pmatrix}=\begin{pmatrix}1\\2\end{pmatrix}+\begin{pmatrix}2\\4\end{pmatrix}$, since multiplying a matrix by a vector is just taking a weighted sum of the columns of the matrix. Moving the third column to the other side of the equals sign turns the null space equation into exactly this relationship between the columns.
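Both equations above are easy to verify numerically (again a sketch in NumPy):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6]])
x = np.array([1, 1, -1])  # the null-space vector from above

# A times x is a weighted sum of the columns of A: 1*col0 + 1*col1 - 1*col2.
print(A @ x)  # [0 0]

# Equivalently, the third column is the sum of the first two.
print(np.array_equal(A[:, 0] + A[:, 1], A[:, 2]))  # True
```

The two printed facts are the same statement rearranged: a zero weighted sum of the columns is a dependence relation among them.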

This means that finding a nonzero vector in the null space of a matrix is equivalent to writing some column of the matrix as a linear combination of the other columns. In other words, not all of the columns are needed to span the column space.
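We can also check the theorem's dimension count directly for this example. One sketch of an independent way to measure the nullity (rather than just computing $n - \text{rank}$) is via the singular value decomposition, where the rows of $V^T$ whose singular values are numerically zero form a basis for the null space:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 6]])

# Full SVD: the last n - rank rows of Vt span the null space of A.
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]

print(null_basis.shape[0])               # 2, i.e. n - k = 3 - 1
print(np.allclose(A @ null_basis.T, 0))  # True: these vectors really are in the null space
```

The rank ($1$) and nullity ($2$) add up to the number of columns ($3$), just as the theorem promises.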

I hope this helped you see how redundancy in a matrix shows up in two ways at once: a column space whose dimension is smaller than the number of columns, AND a non-trivial null space!