Density of normal distribution as dimensions increase

The question I want to ask is this: how does the proportion of samples within 1 SD of the mean of a normal distribution vary as the number of variates increases?

(Almost) everyone knows that in a 1 dimensional normal distribution, 68% of samples can be found within 1 standard deviation of the mean. What about in 2, 3, 4, … dimensions? I know it gets less… but by how much (precisely)? It would be handy to have a table showing the figures for 1, 2, 3… 10 dimensions, as well as 1, 2, 3… 10 SDs. Can anyone point to such a table?

A little more context – I have a sensor which provides data on up to 128 channels. Each channel is subject to (independent) electrical noise. When I sense a calibration object, I can average a sufficient number of measurements and get a mean value across the 128 channels, along with 128 individual standard deviations.

BUT… when it comes to the individual instantaneous readings, the data behaves less like 128 individual readings and more like a single reading of an (up to) 128-dimensional vector quantity. Certainly this is the best way to treat the few critical readings we take (typically 4-6 of the 128).

I want to get a feel for what is “normal” variation and what is “outlier” in this vector space. I’m sure I’ve seen a table like the one I described that would apply to this kind of situation – can anyone point to one?


Let's take X = (X_1,\dots,X_d) \sim N(0,I) : each X_i is normal N(0,1) and the X_i are independent – I guess that's what you mean by higher dimensions.

You would say that X is within 1 sd of the mean when ||X|| < 1 (the distance between X and its mean value is less than 1). Now ||X||^2 = X_1^2 +\cdots+X_d^2\sim \chi^2(d), so this happens with probability P( \xi < 1 ) where \xi\sim\chi^2(d). You can find this in good chi-square tables...
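As a quick sanity check of this reduction, here is a Monte Carlo sketch (my own addition, standard library only; the function name is mine): draw d-dimensional standard normal samples and count how often ||X|| < 1.

```python
import random

def frac_within(d, r=1.0, n=100_000, seed=0):
    """Monte Carlo estimate of P(||X|| < r) for X ~ N(0, I_d)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # ||X||^2 is a sum of d squared independent N(0,1) draws
        sq = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(d))
        if sq < r * r:
            hits += 1
    return hits / n

# Should land close to 0.68, 0.39, 0.20 (cf. the table below)
print([round(frac_within(d), 3) for d in (1, 2, 3)])
```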

Here are a few values:

| d  | P(\xi < 1) |
|----|------------|
| 1  | 0.68       |
| 2  | 0.39       |
| 3  | 0.20       |
| 4  | 0.090      |
| 5  | 0.037      |
| 6  | 0.014      |
| 7  | 0.0052     |
| 8  | 0.0018     |
| 9  | 0.00056    |
| 10 | 0.00017    |

And for 2 sd:

| d  | P(\xi < 4) |
|----|------------|
| 1  | 0.95       |
| 2  | 0.86       |
| 3  | 0.74       |
| 4  | 0.59       |
| 5  | 0.45       |
| 6  | 0.32       |
| 7  | 0.22       |
| 8  | 0.14       |
| 9  | 0.089      |
| 10 | 0.053      |

You can get these values in R with commands like pchisq(1, df=1:10), pchisq(4, df=1:10), etc.
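If R isn't to hand, the same values can be reproduced with a few lines of Python. A minimal standard-library sketch (the function names are mine), using the classical series expansion \gamma(s,y) = y^s e^{-y} \sum_{k\ge 0} y^k / (s(s+1)\cdots(s+k)):

```python
import math

def reg_lower_gamma(s, y, terms=200):
    """P(s, y) = gamma(s, y) / Gamma(s), via the series
    gamma(s, y) = y^s e^{-y} * sum_{k>=0} y^k / (s (s+1) ... (s+k))."""
    if y <= 0:
        return 0.0
    total, term = 0.0, 1.0 / s
    for k in range(terms):
        total += term
        term *= y / (s + k + 1)
    # Prefactor y^s e^{-y} / Gamma(s), computed in log space for stability
    return total * math.exp(s * math.log(y) - y - math.lgamma(s))

def chi2_cdf(x, d):
    """P(xi < x) for xi ~ chi^2(d), i.e. P(d/2, x/2)."""
    return reg_lower_gamma(d / 2, x / 2)

for d in range(1, 11):
    print(d, round(chi2_cdf(1, d), 5), round(chi2_cdf(4, d), 5))
```

The two printed columns reproduce the 1 sd and 2 sd tables above.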

Post Scriptum As cardinal pointed out in the comments, one can estimate the asymptotic behaviour of these probabilities. The CDF of a \chi^2(d) variable is
F_d(x) = P(d/2,x/2) = {\gamma(d/2, x/2) \over \Gamma(d/2)}
where \gamma(s,y) = \int_0^y t^{s-1} e^{-t} \mathrm d t is the lower incomplete \gamma-function, and classically \Gamma(s) = \int_0^\infty t^{s-1} e^{-t} \mathrm d t.

When s is an integer, repeated integration by parts shows that
P(s,y) = e^{-y} \sum_{k=s}^\infty {y^k \over k!},
which is the probability P(N \ge s) for N \sim \mathrm{Poisson}(y), i.e. the upper tail of a Poisson distribution.
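This identity is easy to check numerically. A small sketch (my own, standard library only; function names are mine) comparing the truncated Poisson tail against the defining integral for P(s, y), evaluated by the trapezoid rule:

```python
import math

def poisson_tail(s, y, terms=200):
    """e^{-y} * sum_{k=s}^{s+terms-1} y^k / k!  -- truncated Poisson tail."""
    return sum(math.exp(k * math.log(y) - y - math.lgamma(k + 1))
               for k in range(s, s + terms))

def reg_gamma_quadrature(s, y, n=20_000):
    """P(s, y) = (1/Gamma(s)) * integral_0^y t^{s-1} e^{-t} dt, trapezoid rule."""
    h = y / n
    f = lambda t: t ** (s - 1) * math.exp(-t)
    total = 0.5 * (f(0.0) + f(y)) + sum(f(i * h) for i in range(1, n))
    return total * h / math.gamma(s)

# The two computations should agree for integer s
for s, y in ((2, 2.0), (5, 0.5)):
    print(s, y, poisson_tail(s, y), reg_gamma_quadrature(s, y))
```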

Now this sum is dominated by its first term (many thanks to cardinal): P(s,y) \sim {y^s \over s!} e^{-y} for large s. We can apply this when d is even:
P(\xi < x) = P(d/2,x/2) \sim {1 \over (d/2)!} \left({x\over 2}\right)^{d/2} e^{-x/2} \sim {1\over\sqrt{\pi d}}\, e^{{1\over 2}(d-x)} \left({x\over d}\right)^{d\over 2},
for large even d, the last equivalence using Stirling's formula. Up to factors that are at most exponential in d, this behaves like d^{-d/2}: the probability decays superexponentially fast as d increases.
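The quality of that approximation can be checked numerically. A sketch (my own, standard library only; function names are mine) comparing the exact chi-square CDF for even d, computed via the Poisson-tail identity, against the Stirling-based estimate:

```python
import math

def chi2_cdf_even(d, x, terms=200):
    """P(xi < x) for xi ~ chi^2(d) with d even, via the Poisson tail:
    P(d/2, x/2) = e^{-x/2} * sum_{k >= d/2} (x/2)^k / k!  (truncated)."""
    s, y = d // 2, x / 2
    return sum(math.exp(k * math.log(y) - y - math.lgamma(k + 1))
               for k in range(s, s + terms))

def stirling_estimate(d, x):
    """(1 / sqrt(pi d)) * e^{(d-x)/2} * (x/d)^{d/2}, in log space for stability."""
    return math.exp(-0.5 * math.log(math.pi * d)
                    + 0.5 * (d - x)
                    + 0.5 * d * math.log(x / d))

# The ratio exact / estimate should approach 1 as even d grows
for d in (10, 50, 200):
    print(d, chi2_cdf_even(d, 1.0) / stirling_estimate(d, 1.0))
```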

Source: Link, Question Author: omatai, Answer Author: Elvis
