What is an intuitive meaning of the null space of a matrix? Why is it useful?

I'm not looking for textbook definitions. My textbook gives me the definition, but I just don't "get" it.

E.g.: I think of the rank $r$ of a matrix as the minimum number of dimensions that a linear combination of its columns would have; it tells me that, if I combined the vectors in its columns in some order, I'd get a set of coordinates for an $r$-dimensional space, where $r$ is minimum (please correct me if I'm wrong). So that means I can relate rank (and also dimension) to actual coordinate systems, and so it makes sense to me. But I can't think of any physical meaning for a null space... could someone explain what its meaning would be, for example, in a coordinate system?


Rodrigo de Azevedo
    Your statement "the rank R of a matrix as the minimum number of dimensions that a linear combination of its columns would have..." should be "the rank R of a matrix as the *maximum* number of dimensions that a linear combination of its columns would have...". The rank tells you the dimension of a space spanned by the columns. – Tpofofn Feb 11 '11 at 02:01

10 Answers


If $A$ is your matrix, the null-space is simply put, the set of all vectors $v$ such that $A \cdot v = 0$. It's good to think of the matrix as a linear transformation; if you let $h(v) = A \cdot v$, then the null-space is again the set of all vectors that are sent to the zero vector by $h$. Think of this as the set of vectors that lose their identity as $h$ is applied to them.

Note that the null-space is equivalently the set of solutions to the homogeneous equation $A \cdot v = 0$.

Nullity is the complement of the rank of a matrix: for an $m \times n$ matrix, the rank plus the nullity equals $n$ (the rank-nullity theorem). They are both really important; here is a similar question on the rank of a matrix, where you can find some nice answers.
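As a quick numerical sketch of this definition (the matrix below is made up for illustration), a basis for the null space can be computed with the SVD:

```python
import numpy as np

# A hypothetical 3x3 matrix whose third column is the sum of the first two,
# so it has rank 2 and a one-dimensional null space.
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.]])

# Rows of Vt whose singular values are (numerically) zero span the null space.
U, s, Vt = np.linalg.svd(A)
null_basis = Vt[s < 1e-10]          # here: a single basis vector

v = null_basis[0]
print(np.allclose(A @ v, 0))        # v is "sent to zero" by A
```

The `1e-10` threshold is a common heuristic for treating tiny singular values as zero in floating point.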

    Ohhhhhhhhhhhhhhhhhhhhhhhh the "loses their identity" part made so much sense! So *that* is why, when we reduce the dimensions of an `m * n` matrix, the number of vectors that *don't* lose their identity (the number of pivot columns) + the number of vectors that do (which is `dim Null A`) is just the total number of columns, `n`... thanks! It makes so much more sense now! :) – user541686 Feb 09 '11 at 07:41
I guess it is hard for the zero vector to lose its identity, especially under linear maps. However, it is always in the null space... – Marc van Leeuwen Dec 18 '14 at 17:59
    In a sense it does lose its identity, as it becomes equivalent to non-zero vectors, and so cannot be distinguished from them. – milcak Jun 18 '15 at 17:24
I usually use the analogy of "getting squashed" by the transformation: the kernel (null space) of a transformation consists of those vectors that are squashed to zero, while the column space represents where the remaining vectors land. One can also derive the fact that if you have a linear map between two vector spaces of different dimensions (domain > codomain), some vectors *must* be squashed; there just isn't enough space for them all. – Andy Dec 09 '16 at 09:00
  • In what sense do the vectors in the null space lose their identity? You can undo $f$ (meaning doing $f^{-1}$) just as well on $0$ as on any other vector, and thus you can always reverse engineer the "identity" of the vectors. – Ovi Sep 28 '19 at 03:13
    Take the projection on first dimension $A = [1, 0]$. Both vectors $(0, 19)$ and $(0, 333)$ are $0$s in the target. How can you "undo" this zero to get the 19 and 333 back? Once mapped by the transformation, they become the same - lose their identity. – milcak Oct 12 '19 at 05:33

This is an answer I got to a question of my own; it's pretty awesome!

Let's suppose that the matrix A represents a physical system. As an example, let's assume our system is a rocket, and A is a matrix representing the directions we can go based on our thrusters. So what do the null space and the column space represent?

Well, let's suppose we have a direction that we're interested in. Is it in our column space? If so, then we can move in that direction. The column space is the set of directions that we can achieve based on our thrusters. Let's suppose that we have three thrusters equally spaced around our rocket. If they're all perfectly functional, then we can move in any direction. In this case our column space is the entire space. But what happens when a thruster breaks? Now we've only got two thrusters. Our linear system will have changed (the matrix $A$ will be different), and our column space will be reduced.

What's the null space? The null space is the set of thruster instructions that completely waste fuel: instructions for which the thrusters fire, but the direction doesn't change at all.

Another example: perhaps $A$ represents a rate of return on investments. The range is the set of all rates of return that are achievable. The null space is the set of all investments that wouldn't change the rate of return at all.

Another example: room illumination. The range of $A$ represents the parts of the room that can be illuminated. The null space of $A$ represents the power settings for the lamps that don't change the illumination in the room at all.

-- NicNic8

Justin Chan
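The thruster picture can be made concrete with a small hypothetical example: three thrusters in the plane, spaced 120° apart. Firing all three equally produces no net force, so that instruction lies in the null space:

```python
import numpy as np

# Hypothetical thruster matrix: columns are the 2-D force directions of
# three thrusters spaced 120 degrees apart.
angles = np.deg2rad([90, 210, 330])
A = np.vstack([np.cos(angles), np.sin(angles)])   # 2x3 matrix

# Firing all three thrusters equally produces no net force:
# this instruction wastes fuel -- it is in the null space.
instruction = np.array([1., 1., 1.])
print(np.allclose(A @ instruction, 0))
```

Since $A$ is $2 \times 3$ of rank 2, rank-nullity guarantees a one-dimensional space of such wasted-fuel instructions.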

Imagine a set of map directions at the entrance to a forest. You can apply the directions to different combinations of trails. Some trail combinations will lead you back to the entrance. They are the null space of the map directions.


The rank $r$ of a matrix $A \in \mathbb{R}^{m \times n}$ is, as you have said, the dimension of the column space ($r$ is also the dimension of the row space), i.e. the dimension of the space spanned by linear combinations of the columns of $A$, equivalently the range of $A$. (The use of the word "minimum" in the question is unnecessary.) However, each column vector has $m$ components, so the vectors in the range of $A$ live in $\mathbb{R}^m$ but span only an $r$ $(\leq m)$ dimensional subspace of it. So the columns miss the remaining $(m-r)$-dimensional part of the $m$-dimensional space.

The left null space plays the role of spanning that remaining $(m-r)$-dimensional subspace. This is why the left null space is orthogonal to the column space. So the left null space together with the column space spans the entire $m$-dimensional space, i.e. if $C = \{y \in \mathbb{R}^{m \times 1}: y = Ax\text{ for some }x \in \mathbb{R}^{n \times 1} \}$ and $Z_L = \{z \in \mathbb{R}^{m \times 1}:z^T A = 0 \}$,

then $Z_L \oplus C = \mathbb{R}^{m}$ and $Z_L \perp C$ (note it is the direct sum, not the union, that gives all of $\mathbb{R}^{m}$).

The right null space plays the analogous role for the rows. The rows span only an $r$-dimensional subspace of the $n$-dimensional space. The right null space spans the remaining $(n-r)$-dimensional subspace. This is why the right null space is orthogonal to the row space. So the right null space together with the row space spans the entire $n$-dimensional space, i.e. if $R = \{y \in \mathbb{R}^{n \times 1}: y = A^Tx\text{ for some }x \in \mathbb{R}^{m \times 1} \}$ and $Z_R = \{z \in \mathbb{R}^{n \times 1}: Az = 0 \}$,

then $Z_R \oplus R = \mathbb{R}^{n}$ and $Z_R \perp R$.
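A numerical sketch of the column-space/left-null-space decomposition (the matrix is made up, and NumPy's SVD conventions are assumed):

```python
import numpy as np

# A hypothetical 3x2 matrix of rank 1 (second column = 2 * first).
A = np.array([[1., 2.],
              [1., 2.],
              [0., 0.]])

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))      # numerical rank

C  = U[:, :r]      # orthonormal basis of the column space
ZL = U[:, r:]      # orthonormal basis of the left null space

# The two subspaces are orthogonal, and together they form an
# orthonormal basis of R^m (here m = 3).
print(np.allclose(C.T @ ZL, 0))
B = np.hstack([C, ZL])
print(np.allclose(B @ B.T, np.eye(3)))
```

Columns of $U$ past the rank are exactly the left null space: they satisfy $z^T A = 0$.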

    This is a good mathematical explanation, but it's not really intuitive for me. (It's probably just me, not your explanation.) So far, I've tended to think of linear algebra as a tool for figuring out the number of independent variables (slash, coordinates) in an equation (or matrix), so putting it in terms of that would be more intuitive to me than just a purely mathematical definition of rows and columns. It makes me intuitively see answers without worrying about vocabulary. But +1, nice explanation anyhow. :) – user541686 Feb 09 '11 at 07:47

I find the easiest way to visualise null space is to consider a matrix mapping which represents the mapping of a vector to its shadow on $y=0$ from a fixed light source which is far away.

The null space of this mapping is spanned by a vector pointing directly towards the light source, because the vector representing its shadow on $y=0$ will be $0$. Generally, we can see from this example that for some mappings there will exist nonzero vectors which are mapped to $0$.

We can also see how other vectors undergo a non-invertible mapping. That is, from the shadow the best you could reconstruct the original vector would be up to an ambiguity in the direction of the null space.

(This is also good because you can do it on a table with a pen)

[Diagram: vectors and their shadows cast onto $y=0$ by parallel light rays]
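With the light infinitely far overhead, the shadow map is just the projection onto the $x$-axis, and the picture can be checked numerically (a small illustrative sketch):

```python
import numpy as np

# Shadow on the line y = 0 from a light source infinitely far above:
# projection onto the x-axis (parallel rays make this a linear map).
P = np.array([[1., 0.],
              [0., 0.]])

v = np.array([0., 5.])          # points straight at the light source
print(P @ v)                    # its shadow is the zero vector

# Two different vectors with the same shadow: the map is not invertible,
# and they differ exactly by a null-space vector.
a = np.array([3., 1.])
b = np.array([3., 7.])
print(np.allclose(P @ a, P @ b))
```

From the shadow $P a$ alone, the original vector is recoverable only up to an ambiguity along the null-space direction $(0,1)$.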

The only drawback here is that it is not a linear map. It becomes linear if we move the light source (the lamp) infinitely far away so that the rays become parallel, and consider the Sun instead of the lamp. The parallel rays make it very illustrative that each shadow is described by a line that is parallel to the null space; that is, the null space characterizes the map in this sense. – A.Γ. Feb 23 '21 at 17:56
  • Good point - I've updated the diagram :) – BHC Feb 23 '21 at 19:01

Think of an observer and $n$ speakers at different distances and in different directions. Now write a matrix of equations for the sound from each speaker, based on the contributions of their amplitudes, frequencies and phases. The null space is formed by all the possible combinations of settings for which the total (superimposed) sound at the observer's location is zero. That is, the observer hears nothing even though the speakers are playing.
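A minimal sketch of this idea, assuming for simplicity one observer and real (amplitude-only) weights; the numbers are made up:

```python
import numpy as np

# Hypothetical setup: the sound at the observer is a weighted sum of the
# speaker amplitudes (weights model distance attenuation and phase).
# One observer, three speakers -> a 1x3 matrix.
A = np.array([[0.5, 1.0, 0.25]])

# Any amplitude vector in the null space is silent at the observer:
# 0.5*2 - 1.0*1.5 + 0.25*2 = 0
amps = np.array([2., -1.5, 2.])
print(np.allclose(A @ amps, 0))     # the observer hears nothing
```

With one equation and three unknowns, rank-nullity gives a two-dimensional space of such "silent" speaker settings.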


It's the solution space of the matrix equation $AX=0$. It includes the trivial solution, the zero vector $0$. If $A$ is row equivalent to the identity matrix, then the zero vector is the only element of the solution space. If it is not, i.e. when the column space of $A$ has dimension less than the number of columns of $A$, then the equation $AX=0$ has non-trivial solutions, which form a vector space whose dimension is termed the nullity.


If your matrix $A$ is $n \times m$ (it doesn't have to be square) and has rank $r < \min(n,m)$, then the null space is spanned by $m-r$ linearly independent vectors and is the orthogonal complement in $\mathbb{R}^{m}$ of the row space of $A$ (i.e., it consists of the vectors in $\mathbb{R}^{m}$ that are orthogonal to all $r$ basis vectors of the row space). See the rank-nullity theorem. In the simplest example, if $A=\left[\begin{array}{cc} 1&0\\ 0&0 \end{array}\right]$, then the column space is $\alpha\left[\begin{array}{c} 1\\ 0 \end{array}\right], \alpha \in \mathbb{R}$, and $null(A)=\beta\left[\begin{array}{c} 0\\ 1 \end{array}\right], \beta \in \mathbb{R}$.

Play around with "null" in base Matlab, or with the SVD in Python like in this answer, where it can be seen that the zero singular values correspond to the right singular vectors (rows of $V^T$) that span the null space (also see here).
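For instance, on the simple example above, a NumPy sketch of what Matlab's `null` computes:

```python
import numpy as np

# The simple example from above: rank 1, so a 1-D null space.
A = np.array([[1., 0.],
              [0., 0.]])

# Zero singular values mark the rows of Vt (right singular vectors)
# that span the null space.
U, s, Vt = np.linalg.svd(A)
null_basis = Vt[s < 1e-10]
print(null_basis)        # spans beta * [0, 1] (up to sign)
```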


In mechanical engineering, an example of $AX=B$ is found in the finite element method as $KU=F$, in which $K$ is the stiffness matrix, $U$ is the vector of nodal displacements, and $F$ represents the lumped forces at the same nodes. If $KU=0$ for a nonzero $U$, the system can move through the displacements $U$ without any forces developing in it; in other words, nothing in the structure prevents that motion.

Chris Tang
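The FEM picture above can be sketched with the simplest possible case: an unsupported two-node spring element, whose singular stiffness matrix has the rigid-body translation in its null space (the stiffness value is made up):

```python
import numpy as np

# Hypothetical 2-node spring (stiffness k) with no supports: the
# stiffness matrix is singular, and its null space is the rigid-body mode.
k = 100.0
K = np.array([[ k, -k],
              [-k,  k]])

# Translating both nodes equally deforms nothing, so no forces develop:
U = np.array([1., 1.])              # rigid-body displacement
print(np.allclose(K @ U, 0))        # F = K U = 0
```

Adding a support (fixing one node) removes this null-space mode and makes the reduced stiffness matrix invertible.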

There is another perspective on the null space of a matrix, via $AX = 0$:

  • We can think of the vector $X$ as a vector orthogonal to all the rows of the matrix $A$
  • The null space of $A$ can then be defined as the set of all vectors orthogonal to the rows of $A$ (equivalently, to the row space of $A$)
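A small numerical check of this orthogonality view (the matrix and vector are made up for illustration):

```python
import numpy as np

# A made-up matrix and a vector in its null space.
A = np.array([[1., 2., 3.],
              [4., 5., 6.]])
x = np.array([1., -2., 1.])         # satisfies A @ x = 0

# x is orthogonal to every row of A...
print([float(row @ x) for row in A])

# ...and hence to every linear combination of the rows (the row space).
combo = 3 * A[0] - 7 * A[1]
print(float(combo @ x))
```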