Asymptotic normality of order statistics of heavy-tailed distributions

Background:
I have a sample which I want to model with a heavy-tailed distribution. It contains some extreme values, so the spread of the observations is relatively large. My idea was to model this with a generalized Pareto distribution (GPD), which I have done. Now, the 0.975 quantile of my empirical data (about 100 data points) is lower than the 0.975 quantile of the generalized Pareto distribution that I fitted to my data. Is there some way to check whether this difference is something to worry about?
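(For concreteness, here is a minimal sketch of that comparison, with hypothetical scale and shape values standing in for my actual fitted parameters, and the quantile computed from the standard closed form $Q(p) = \sigma\left((1-p)^{-\xi} - 1\right)/\xi$ for a zero-threshold GPD:)

#hypothetical GPD parameters (stand-ins for the fitted values)
sigma <- 2    #scale
xi    <- 0.3  #shape

#closed-form GPD quantile function
qgpd_manual <- function(p, sigma, xi) sigma * ((1 - p)^(-xi) - 1) / xi

#simulate 100 GPD observations by inversion, then compare quantiles
set.seed(1)
x <- qgpd_manual(runif(100), sigma, xi)
quantile(x, probs = 0.975)       #empirical 0.975 quantile
qgpd_manual(0.975, sigma, xi)    #model 0.975 quantile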

We know that the asymptotic distribution of the sample quantiles is given by:

$$\hat{q}_p \sim N\!\left(q_p,\ \frac{p(1-p)}{n\,f(q_p)^2}\right)$$

where $f$ is the density of the underlying distribution and $n$ is the sample size.

So I thought it would be a good idea to entertain my curiosity by plotting the 95% confidence bands around the 0.975 quantile of a generalized Pareto distribution with the same parameters I got from fitting my data.

[Figure: the fitted generalized Pareto distribution]

As you can see, we are working with some extreme values here. And since the spread is so enormous, the density function takes extremely small values, pushing the confidence bands to the order of $\pm 10^{12}$ when using the variance from the asymptotic normality formula above:

$$\pm\, 1.96\sqrt{\frac{0.975 \cdot 0.025}{n\left(f_{\mathrm{GPD}}(q_{0.975})\right)^2}}$$

So this does not make any sense. I have a distribution with only positive outcomes, yet the confidence interval includes negative values. Something is going on here. If I calculate the bands around the 0.5 quantile instead, they are not that huge, but still huge.
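To see the blow-up numerically, here is a minimal sketch of the band computation from the formula above, with the GPD density written out by hand and hypothetical scale/shape values standing in for my fitted parameters:

#hand-rolled GPD density and quantile (zero threshold, xi > 0)
dgpd_manual <- function(x, sigma, xi) (1 / sigma) * (1 + xi * x / sigma)^(-1 / xi - 1)
qgpd_manual <- function(p, sigma, xi) sigma * ((1 - p)^(-xi) - 1) / xi

n <- 100
sigma <- 2; xi <- 0.9                   #hypothetical heavy-tailed case
q975 <- qgpd_manual(0.975, sigma, xi)   #about 59
f975 <- dgpd_manual(q975, sigma, xi)    #tiny density in the far tail

#half-width of the asymptotic 95% band: larger than the quantile itself,
#so the interval dips below zero
1.96 * sqrt(0.975 * 0.025 / (n * f975^2))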

I proceeded to see how this works with another distribution, namely the N(1,1) distribution: simulate n = 100 observations from a N(1,1) distribution and check whether the sample quantiles fall within the confidence bands. I do this 10,000 times and record the proportion of runs in which the 0.975 and 0.7 sample quantiles of the simulated observations land inside the bands.

################################################
# Test at the 0.975 quantile
################################################

#normal(1,1)

#find 0.975 quantile
q_norm <- qnorm(0.975, mean = 1, sd = 1)
#density value at the 0.975 quantile:
f_norm <- dnorm(q_norm, mean = 1, sd = 1)
#confidence band half-width:
band <- 1.96 * sqrt((0.975 * 0.025) / (100 * f_norm^2))
u <- q_norm + band
l <- q_norm - band

hit <- numeric(10000)
for (i in 1:10000) {
  d  <- rnorm(n = 100, mean = 1, sd = 1)
  dq <- quantile(d, probs = 0.975)
  hit[i] <- as.numeric(dq[[1]] >= l & dq[[1]] <= u)
}
sum(hit) / 10000

#################################################################
# Test at the 0.7 quantile
#################################################################
#using lower quantile:

#normal(1,1)

#find 0.7 quantile
q_norm <- qnorm(0.7, mean = 1, sd = 1)
#density value at the 0.7 quantile:
f_norm <- dnorm(q_norm, mean = 1, sd = 1)
#confidence band half-width:
band <- 1.96 * sqrt((0.7 * 0.3) / (100 * f_norm^2))
u <- q_norm + band
l <- q_norm - band

hit <- numeric(10000)
for (i in 1:10000) {
  d  <- rnorm(n = 100, mean = 1, sd = 1)
  dq <- quantile(d, probs = 0.7)
  hit[i] <- as.numeric(dq[[1]] >= l & dq[[1]] <= u)
}
sum(hit) / 10000

EDIT: I fixed the code, and both quantiles give approximately 95% hits with n = 100 and σ = 1. If I crank up the standard deviation to σ = 2, then very few hits are within the bands. So the question still stands.

EDIT2: I retract what I claimed in the first EDIT above, as pointed out in the comments by a helpful gentleman. It actually looks like these CIs are good for the normal distribution.

Is this asymptotic normality of the order statistics just a very bad measure to use if one wants to check whether some observed quantile is plausible under a given candidate distribution?

Intuitively, it seems to me that there is a relationship between the variance of the distribution (the one assumed to have generated the data, or, in my R example, the one we know generated the data) and the number of observations. If you have 1000 observations and an enormous variance, these bands are bad. If you have 1000 observations and a small variance, these bands would perhaps make sense.

Anybody care to clear this up for me?

Answer

I’m assuming your derivation there comes from something like the one on this page.

I have a distribution with only positive outcomes, and the confidence intervals include negative values.

Well, given the normal approximation, that makes sense. Nothing stops a normal approximation from producing negative values, which is why it is a poor approximation for a bounded quantity when the sample size is small and/or the variance is large. If you crank up the sample size, the intervals shrink, because the sample size appears in the denominator of the expression for the interval width. The variance enters the problem through the density: for the same mean, a higher variance gives a different density, higher at the margins and lower near the center. A lower density means a wider confidence interval, because the density also appears in the denominator. How changing the sample size and the variance together affects the width of the confidence interval, and the quality of the approximation, depends on the distribution generating the data as well as on the particular quantile.
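As a quick illustration of those two effects, here is a small sketch tabulating the half-width of the asymptotic band for the 0.975 quantile of a N(1, σ²) distribution over an arbitrary grid of sample sizes and standard deviations:

#half-width of the asymptotic 95% band for the p-th quantile of N(1, s^2)
half_width <- function(n, s, p = 0.975) {
  q <- qnorm(p, mean = 1, sd = s)
  f <- dnorm(q, mean = 1, sd = s)
  1.96 * sqrt(p * (1 - p) / (n * f^2))
}

#rows: n = 100, 1000, 10000; columns: sd = 1, 2, 5
#width shrinks with n and grows with the standard deviation
outer(c(100, 1000, 10000), c(1, 2, 5), half_width)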

A bit of googling found this page, among others, which uses the normal approximation to the binomial distribution to construct the confidence limits. The basic idea is that each observation falls below the quantile with probability $q$, so the number of observations below it is binomial. When the sample size is sufficiently large (that's important), the binomial distribution is well approximated by a normal distribution with mean $nq$ and variance $nq(1-q)$. So the lower confidence limit will have index $j = nq - 1.96\sqrt{nq(1-q)}$, and the upper confidence limit will have index $k = nq + 1.96\sqrt{nq(1-q)}$. There's a possibility that either $k > n$ or $j < 1$ when working with quantiles near the edge, and the reference I found is silent on that. I chose to just treat the maximum or minimum as the relevant value.
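For example, with n = 100 and q = 0.975 the upper index runs off the end of the sample and gets clamped:

n <- 100; q <- 0.975
band <- 1.96 * sqrt(n * q * (1 - q))  #about 3.06
max(ceiling(n * q - band), 1)         #lower index: ceiling(94.44) = 95
min(ceiling(n * q + band), n)         #upper index: ceiling(100.56) = 101, clamped to 100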

In the following rewrite of your code, I construct the confidence limits on the empirical data and test whether the theoretical quantile falls inside them. That makes more sense to me, because the quantile of the observed data set is the random variable. The coverage for n > 1000 is ~0.95. For n = 100 it is worse, at 0.85, but that's to be expected for quantiles near the tails with small sample sizes.

#find the 0.975 quantile of the candidate distribution
q <- 0.975
q_norm <- qnorm(q, mean = 1, sd = 1)

#half-width of the index band (note: depends on sample size)
n <- 10000
band <- 1.96 * sqrt(n * q * (1 - q))

#indices of the order statistics forming the interval, clamped to [1, n]
u <- min(ceiling(n * q + band), n)
l <- max(ceiling(n * q - band), 1)

hit <- numeric(10000)
for (i in 1:10000) {
  d <- sort(rnorm(n, mean = 1, sd = 1))
  hit[i] <- as.numeric(q_norm >= d[l] & q_norm <= d[u])
}
sum(hit) / 10000

As far as determining what sample size is "big enough", well, bigger is better. Whether any particular sample is "big enough" depends strongly on the problem at hand, and how fussy you are about things like the coverage of your confidence limits.

Attribution
Source: Link, Question Author: Erosennin, Answer Author: atiretoo
