I was in a seminar today and the lecturer said that the Gaussian distribution is isotropic. What does it mean for a distribution to be isotropic? It seems like he was using this property for the pseudo-independence of vectors whose entries are sampled from the normal distribution.

  • In general, a multivariate normal distribution can be anisotropic depending on the covariance matrix. There has clearly been some miscommunication somewhere along the way. – Brian Borchers Oct 30 '16 at 18:52

4 Answers


TLDR: An isotropic Gaussian is one whose covariance matrix has the simplified form $\Sigma = \sigma^{2}I$.

Some motivations:

Consider the traditional gaussian distribution:

$$ \mathcal{N}(\mu,\,\Sigma) $$

where $\mu$ is the mean and $\Sigma$ is the covariance matrix.

Consider how the number of free parameters in this Gaussian grows as the number of dimensions grows.

The number of parameters in $\mu$ grows linearly with the dimension, but the number in $\Sigma$ grows quadratically!

This quadratic growth can be very computationally expensive, so $\Sigma$ is often restricted to $\Sigma = \sigma^{2}I$, a single scalar variance $\sigma^{2}$ multiplied by the identity matrix.

Note that this gives a $\Sigma$ in which all dimensions are independent and the variance of each dimension is the same, so the Gaussian is circular (in 2D) or spherical (in higher dimensions).
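As a rough sketch of that parameter growth (my own illustration, not part of the answer; the dimension $d = 100$ is an arbitrary choice):

```python
d = 100  # arbitrary example dimension

# Mean: one free parameter per dimension (linear growth).
mean_params = d

# Full symmetric covariance: d diagonal entries plus d*(d-1)/2 distinct
# off-diagonal entries, i.e. d*(d+1)/2 free parameters (quadratic growth).
full_cov_params = d * (d + 1) // 2

# Isotropic covariance sigma^2 * I: a single free parameter, regardless of d.
iso_cov_params = 1

print(mean_params, full_cov_params, iso_cov_params)  # 100 5050 1
```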

Disclaimer: Not a mathematician, and I only just learned about this so may be missing some things :)

Hope that helps!

  • The variables of the multivariate Gaussian may not be independent, even if they have zero covariance. Covariance is a measure of only linear association: independence implies zero covariance, but not vice versa. See this [example](https://stats.stackexchange.com/questions/12842/covariance-and-independence). – mloning Nov 29 '19 at 09:10
  • @mloning, your comment is a bit misleading here. When a random vector has a multivariate normal distribution, zero covariance does indeed imply independence. All of this gets its own Wikipedia page: https://en.wikipedia.org/wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent The notation above suggests multivariate normality, so it's fine. Another way in which the normal distribution is magical. – RMurphy Apr 24 '20 at 19:56
  • Yes, I wasn't aware of that, thanks for the clarification! – mloning Apr 25 '20 at 15:00
  • Another motivation: the conjugate prior of a multivariate normal distribution with unit variance is isotropic normal. Like you say, there's no need for all those parameters! – Neil G Dec 23 '20 at 06:19
  • Easy to understand for beginners. As I am not a native English speaker, I was confused by the word `isotropic`. – GoingMyWay Mar 09 '21 at 09:18

I'd just like to add a bit of visuals to the other answers.

When the covariance matrix is diagonal, the variables are uncorrelated and the distribution is aligned with the axes. (Note that a diagonal $\Sigma$ alone is not enough for isotropy; isotropy additionally requires all the diagonal entries to be equal, as the comments below point out.)

For example, for $\Sigma = \begin{pmatrix}1 & 0 \\ 0 & 30\end{pmatrix}$, you'd get something like this:

image of 2D gaussian with higher Y variance

So, what happens when it is not isotropic? For example, when $\Sigma = \begin{pmatrix}1 & 15 \\ 15 & 30\end{pmatrix}$, the distribution appears "rotated", no longer aligned with the axes:

image of 2D gaussian with covariance between X and Y

Note that this is just an illustration: the $\Sigma$ above is not a valid covariance matrix, since it is not PSD (its determinant is $30 - 225 < 0$).


import numpy as np
from matplotlib import pyplot as plt

# This covariance matrix is not PSD; check_valid='warn' makes NumPy
# warn instead of raising an error, so the samples are only illustrative.
pts = np.random.multivariate_normal([0, 0], [[1, 15], [15, 31]],
                                    size=10000, check_valid='warn')

plt.scatter(pts[:, 0], pts[:, 1], s=1)
plt.show()
  • This seems to contradict the other answers: the matrix [1 0; 0 30] is a diagonal matrix, but it can't be written as $\sigma I$ since not all diagonal entries are the same. I'm not sure which one it should be. – Alex Li Sep 27 '20 at 20:34
  • Isotropic means the same in all directions, which the distribution in this example clearly is not. Aligned with the axes is not the same as isotropic in general; in fact, it is only the same when the covariance matrix can be written as $\sigma I$, as other commenters have noted. – Epimetheus Sep 19 '21 at 20:00
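To make "the same in all directions" concrete, here is a small sketch of my own (not part of the answer; the sample size, seed, and rotation angle are arbitrary): rotating samples from an isotropic Gaussian leaves the empirical covariance essentially unchanged, which would not be true for the anisotropic examples above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Isotropic Gaussian: Sigma = sigma^2 * I, here with sigma^2 = 2.
sigma2 = 2.0
x = rng.multivariate_normal([0.0, 0.0], sigma2 * np.eye(2), size=100_000)

# An arbitrary rotation matrix.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x_rot = x @ R.T

# Both empirical covariances are close to 2 * I: rotating the samples
# does not change the distribution.
print(np.cov(x.T))
print(np.cov(x_rot.T))
```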

I am not a math major, but I will try to describe my understanding: an isotropic Gaussian distribution is a multidimensional Gaussian whose covariance matrix is the identity matrix multiplied by a single scalar, so every diagonal entry is the same. Each dimension can then be seen as an independent one-dimensional Gaussian distribution (no covariance between dimensions).
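A quick numerical check of that equivalence (my own sketch; the dimension, variance, seed, and sample size are arbitrary choices): sampling a $d$-dimensional isotropic Gaussian gives the same distribution as sampling $d$ independent one-dimensional Gaussians with equal variance.

```python
import numpy as np

rng = np.random.default_rng(42)
d, sigma, n = 3, 2.0, 200_000

# (a) Draws from a d-dimensional Gaussian with covariance sigma^2 * I ...
a = rng.multivariate_normal(np.zeros(d), sigma**2 * np.eye(d), size=n)

# (b) ... are distributed like d independent 1-D Gaussians with std sigma.
b = rng.normal(0.0, sigma, size=(n, d))

# Both empirical covariances are approximately sigma^2 * I = 4 * I.
print(np.cov(a.T))
print(np.cov(b.T))
```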


Thanks to Tomoiagă's answer for a valuable learning opportunity; it explains why $\Sigma$ must be PSD. The following is based on material from the Coursera course Probabilistic Deep Learning with TensorFlow 2.

The definition of positive semi-definite:

A symmetric matrix $M \in \mathbb{R}^{d\times d}$ is positive semi-definite if it satisfies $b^TMb \ge 0$ for all $b\in\mathbb{R}^d$. If, in addition, $b^TMb = 0$ implies $b=0$, then $M$ is positive definite.
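An equivalent characterization that is easy to code: a symmetric matrix is PSD iff all its eigenvalues are non-negative. A minimal numpy check (the helper `is_psd` is my own, not from the course):

```python
import numpy as np

def is_psd(M, tol=1e-10):
    """Check symmetry, then that all eigenvalues are >= 0 (up to tolerance)."""
    M = np.asarray(M)
    if not np.allclose(M, M.T):
        return False
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

print(is_psd([[10., 5.], [5., 10.]]))    # True  (eigenvalues 5 and 15)
print(is_psd([[10., 11.], [11., 10.]]))  # False (eigenvalues -1 and 21)
```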

In short: a valid covariance matrix must be symmetric and positive (semi-)definite. But how do we check that a given $\Sigma$ satisfies this requirement?

Here comes the Cholesky decomposition.

For every real-valued symmetric positive-definite matrix $M$, there is a unique lower-triangular matrix $L$ with positive diagonal entries for which

\begin{equation} LL^T = M \end{equation} This is called the Cholesky decomposition of $M$.

Let's write some code.

Given a PSD matrix $ \Sigma = \begin{bmatrix} 10 & 5 \\ 5 & 10 \end{bmatrix}$ and a non-PSD matrix $ \Sigma = \begin{bmatrix} 10 & 11 \\ 11 & 10 \end{bmatrix}$ as candidate covariance matrices:

import tensorflow as tf

sigma = [[10., 5.], [5., 10.]]
scale_tril = tf.linalg.cholesky(sigma)
scale_tril

<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[3.1622777, 0.       ],
       [1.5811388, 2.738613 ]], dtype=float32)>

bad_sigma = [[10., 11.], [11., 10.]]
try:
    scale_tril = tf.linalg.cholesky(bad_sigma)
except Exception as e:
    print(e)

Cholesky decomposition was not successful. The input might not be valid.

In practice, it is often more convenient to parameterize the distribution directly by a lower-triangular matrix $L$, so that $\Sigma = LL^T$ is valid by construction.
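To illustrate why that parameterization is convenient (a numpy sketch of my own; the dimension and seed are arbitrary): any lower-triangular $L$ with positive diagonal yields a valid covariance $LL^T$ by construction, so no PSD check is needed.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

# Start from an arbitrary lower-triangular matrix ...
L = np.tril(rng.normal(size=(d, d)))
# ... and force a positive diagonal (e.g. absolute value plus a small offset).
np.fill_diagonal(L, np.abs(np.diag(L)) + 0.1)

# Sigma = L @ L.T is symmetric positive definite by construction.
Sigma = L @ L.T
print(np.allclose(Sigma, Sigma.T))            # True
print(np.all(np.linalg.eigvalsh(Sigma) > 0))  # True
```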

Finally, a demo with an isotropic Gaussian and a non-isotropic one.

import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

## isotropic normal
sigma = [[1., 0.], [0., 1.]]
# Round-trip through the Cholesky factor: sigma == lower_triangular @ lower_triangular.T
lower_triangular = np.linalg.cholesky(sigma)
sigma = np.matmul(lower_triangular, np.transpose(lower_triangular))

bivariate_normal = np.random.multivariate_normal([0, 0], sigma, size=10000, check_valid='warn')

x1 = bivariate_normal[:, 0]
x2 = bivariate_normal[:, 1]
sns.jointplot(x=x1, y=x2, kind='kde', space=0, color='b')


## non-isotropic normal: rerun the above with, e.g.,
# sigma = [[1., 0.6], [0.6, 1.]]

