I was in a seminar today and the lecturer said that the Gaussian distribution is isotropic. What does it mean for a distribution to be isotropic? It seems he uses this property for the pseudo-independence of vectors whose entries are each sampled from the normal distribution.

In general, a multivariate normal distribution can be anisotropic depending on the covariance matrix. There has clearly been some miscommunication somewhere along the way. – Brian Borchers Oct 30 '16 at 18:52
4 Answers
TLDR: An isotropic Gaussian is one whose covariance matrix takes the simplified form $\Sigma = \sigma^{2}I$, i.e. a scalar variance times the identity.
Some motivations:
Consider the traditional gaussian distribution:
$$ \mathcal{N}(\mu,\,\Sigma) $$
where $\mu$ is the mean and $\Sigma$ is the covariance matrix.
Consider how the number of free parameters in this Gaussian grows as the number of dimensions grows.
$\mu$ grows linearly with the dimension, but $\Sigma$ grows quadratically!
This quadratic growth can be computationally expensive, so $\Sigma$ is often restricted to $\Sigma = \sigma^{2}I$, a single scalar variance multiplied by an identity matrix.
Note that this makes all dimensions independent with the same variance, so the Gaussian is circular (in 2D) or spherical (in higher dimensions).
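The claims above are easy to check numerically; a minimal numpy sketch (the variable names are illustrative, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma2 = 3, 2.0

# Isotropic Gaussian: only d + 1 free parameters (the mean vector plus one
# shared variance), instead of d + d(d+1)/2 for a full covariance matrix.
cov = sigma2 * np.eye(d)
samples = rng.multivariate_normal(np.zeros(d), cov, size=100_000)

# The empirical covariance is close to sigma^2 * I: equal variances on the
# diagonal, near-zero covariances off the diagonal.
emp_cov = np.cov(samples, rowvar=False)
print(np.round(emp_cov, 2))
```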
Disclaimer: Not a mathematician, and I only just learned about this so may be missing some things :)
Hope that helps!

The variables of a multivariate Gaussian may not be independent even if they have zero covariance. Covariance is a measure of only linear association: independence implies zero covariance, but not vice versa. See this [example](https://stats.stackexchange.com/questions/12842/covarianceandindependence). – mloning Nov 29 '19 at 09:10

@mloning, your comment is a bit misleading here. When a random vector is multivariate normal, zero covariance does indeed imply independence. All of this gets its own wikipedia page: https://en.wikipedia.org/wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent The notation above suggests multivariate normality to me, so it's fine. Another way in which the normal distribution is magical. – RMurphy Apr 24 '20 at 19:56
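The distinction in this comment thread is easy to demonstrate; a small sketch (an illustration of the general phenomenon, not taken from the linked example):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(200_000)
y = x ** 2  # y is completely determined by x, so they are dependent

# Cov(x, y) = E[x^3] - E[x] E[x^2] = 0 by the symmetry of x,
# even though x and y are as dependent as can be.
print(np.cov(x, y)[0, 1])
```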


Another motivation: The conjugate prior of a multivariate normal distribution with unit variance is isotropic normal. Like you say, there's no need for all those parameters! – Neil G Dec 23 '20 at 06:19

Easy to understand for beginners. As I am not a native English speaker, I was confused by the word `isotropic`. – GoingMyWay Mar 09 '21 at 09:18
I'd just like to add a bit of visuals to the other answers.
When the covariance matrix is diagonal, i.e. the variables are uncorrelated, the distribution is aligned with the axes.
For example, for $\Sigma = \begin{pmatrix}1 & 0 \\ 0 & 30\end{pmatrix}$, you'd get something like this:
So, what happens when the covariance matrix is not diagonal? For example, with $\Sigma = \begin{pmatrix}1 & 15 \\ 15 & 30\end{pmatrix}$, the distribution appears "rotated", no longer aligned with the axes:
Note that this is just an illustration: the $\Sigma$ above is not actually a valid covariance matrix, since it is not PSD (its determinant is negative).
Code:
import numpy as np
from matplotlib import pyplot as plt

# The covariance is not PSD; check_valid='warn' lets sampling proceed anyway
pts = np.random.multivariate_normal([0, 0], [[1, 15], [15, 31]], size=10000, check_valid='warn')
plt.scatter(pts[:, 0], pts[:, 1], s=1)
plt.xlim((-30, 30))
plt.ylim((-30, 30))
plt.show()

This seems to contradict the other answers: the matrix [1 0; 0 30] is a diagonal matrix, but it can't be written as $\sigma^2 I$ since not all diagonal entries are the same. I'm not sure which one it should be. – Alex Li Sep 27 '20 at 20:34

Isotropic means "the same in all directions", which the distribution in this example clearly is not. Aligned with the axes is not the same as isotropic in general; in fact they coincide only when the covariance matrix can be written as $\sigma^2 I$, as other commenters have noted. – Epimetheus Sep 19 '21 at 20:00
I am not a math major, but I will give a try at describing my understanding: an isotropic Gaussian distribution is a multidimensional Gaussian distribution whose covariance matrix is the identity matrix multiplied by a single number. Each dimension can then be seen as an independent one-dimensional Gaussian distribution (no covariance between dimensions).
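That per-dimension view can be sketched in code: sampling from $\mathcal{N}(0, \sigma^2 I)$ gives the same distribution as stacking independent one-dimensional draws (a rough numerical check, assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(1)
d, sigma, n = 4, 1.5, 50_000

# Joint sampling from the isotropic multivariate normal ...
joint = rng.multivariate_normal(np.zeros(d), sigma**2 * np.eye(d), size=n)
# ... versus d independent one-dimensional N(0, sigma^2) draws.
indep = sigma * rng.standard_normal((n, d))

# Both give per-dimension standard deviations close to sigma.
print(np.std(joint, axis=0), np.std(indep, axis=0))
```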
Thanks to Tomoiagă for the valuable learning opportunity; this expands on the PSD point in his answer.
The material is from the Coursera course Probabilistic Deep Learning with TensorFlow 2.
The definition of positive semidefinite:
A symmetric matrix $M \in \mathbb{R}^{d\times d}$ is positive semidefinite if it satisfies $b^TMb \ge 0$ for all $b\in\mathbb{R}^d$. If, in addition, $b^TMb = 0 \Rightarrow b=0$, then $M$ is positive definite.
In short: a valid covariance matrix must be symmetric and positive (semi)definite. But how do we check that a given $\Sigma$ satisfies these requirements?
Here comes the Cholesky decomposition:
For every real-valued symmetric positive-definite matrix $M$, there is a unique lower-triangular matrix $L$ with positive diagonal entries for which
\begin{equation} LL^T = M. \end{equation} This is called the Cholesky decomposition of $M$.
Let's write some code.
Given a PSD matrix $ \Sigma = \begin{bmatrix} 10 & 5 \\ 5 & 10 \end{bmatrix}$ and a non-PSD matrix $ \Sigma = \begin{bmatrix} 10 & 11 \\ 11 & 10 \end{bmatrix}$ as candidate covariance matrices:
import numpy as np
import tensorflow as tf

sigma = [[10., 5.], [5., 10.]]
tf.linalg.cholesky(sigma)
Output:
<tf.Tensor: shape=(2, 2), dtype=float32, numpy= array([[3.1622777, 0. ],
[1.5811388, 2.738613 ]], dtype=float32)>
bad_sigma = [[10., 11.], [11., 10.]]
try:
    scale_tril = tf.linalg.cholesky(bad_sigma)
except Exception as e:
    print(e)
Output:
Cholesky decomposition was not successful. The input might not be valid.
For convenience, it is often easier to create a lower-triangular matrix $L$ directly and form the covariance as $LL^T$.
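A quick sketch of why that parameterization is convenient: any lower-triangular $L$ with positive diagonal entries yields a valid covariance via $\Sigma = LL^T$, with no validity check needed (illustrative numpy, not from the course):

```python
import numpy as np

rng = np.random.default_rng(7)

# Start from an arbitrary lower-triangular matrix ...
L = np.tril(rng.standard_normal((3, 3)))
# ... and force its diagonal entries to be positive.
np.fill_diagonal(L, np.abs(np.diag(L)) + 0.1)

# cov = L L^T is symmetric and positive definite by construction.
cov = L @ L.T
print(np.linalg.eigvalsh(cov))  # all eigenvalues positive
```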
Last, the demo with an isotropic Gaussian (and a non-isotropic one, commented out).
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

## isotropic normal
sigma = [[1., 0.], [0., 1.]]
lower_triangular = np.linalg.cholesky(sigma)
print(lower_triangular)
sigma = np.matmul(lower_triangular, np.transpose(lower_triangular))

bivariate_normal = np.random.multivariate_normal([0, 0], sigma, size=10000, check_valid='warn')
x1 = bivariate_normal[:, 0]
x2 = bivariate_normal[:, 1]
sns.jointplot(x=x1, y=x2, kind='kde', space=0, color='b')

## non-isotropic normal: uncomment to compare
# sigma = [[1., 0.6], [0.6, 1.]]