Statistical variation in two Formula 1 qualifying formats

I’ve just read this BBC article about the qualifying format in Formula 1.

Organisers wish to make qualifying less predictable, i.e. to increase statistical variation in the result. Glossing over a few irrelevant details, at the moment the drivers are ranked by their best single lap from (for concreteness) two attempts.

One F1 chief, Jean Todt, proposed that ranking drivers by the average of two laps would increase statistical variation, as drivers might be twice as likely to make a mistake. Other sources argued that any averaging would surely decrease statistical variation.

Can we say who is right under reasonable assumptions? I suppose it boils down to the relative variance of $\text{mean}(x,y)$ versus $\min(x,y)$, where $x$ and $y$ are random variables representing a driver’s two lap times?
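For a quick numerical feel under one simple assumed model (not given in the question), take the two lap times to be iid standard normal and compare the two statistics by Monte Carlo. For normals the mean should come out less variable: $\mathrm{Var}(\text{mean}) = \sigma^2/2$, while $\mathrm{Var}(\min) = \sigma^2(1 - 1/\pi) \approx 0.68\,\sigma^2$.

```python
import numpy as np

# Assumption: lap times X, Y iid Normal(0, 1) -- a stand-in model,
# not something the question specifies.
rng = np.random.default_rng(0)
x, y = rng.standard_normal((2, 1_000_000))

var_mean = np.var((x + y) / 2)          # theory: 1/2
var_min = np.var(np.minimum(x, y))      # theory: 1 - 1/pi ~ 0.682

print(f"Var(mean) ~ {var_mean:.3f}, Var(min) ~ {var_min:.3f}")
```

Under this (normal) model, averaging does reduce the variance, supporting the "averaging decreases variation" side; whether that holds in general depends on the distribution, as the answer below illustrates.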


I think it depends on the distribution of lap times.

Let $X, Y$ be independent, identically distributed lap times.

  1. If $P(X=0)=P(X=1)=\frac{1}{2}$, then $\mathrm{Var}\!\left(\frac{X+Y}{2}\right) = \frac{1}{8} < \mathrm{Var}(\min(X,Y)) = \frac{3}{16}$.
  2. If, however, $P(X=0) = 0.9$, $P(X=100)=0.1$, then $\mathrm{Var}\!\left(\frac{X+Y}{2}\right) = 450 > \mathrm{Var}(\min(X,Y)) = 99$.

This is in line with the argument mentioned in the question about making a mistake (i.e., posting an exceptionally slow lap with small probability). Thus, we would have to know the distribution of lap times to decide.
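The two cases above can be checked exactly by enumerating the four possible pairs $(X, Y)$; a minimal sketch (the helper name `variances` is my own, not from the answer):

```python
from itertools import product

def variances(dist):
    """Exact Var(mean(X, Y)) and Var(min(X, Y)) for X, Y iid with the
    given finite distribution {value: probability}."""
    def var_of(stat):
        # Enumerate all (x, y) pairs with their joint probabilities.
        pairs = [(stat(x, y), px * py)
                 for (x, px), (y, py) in product(dist.items(), repeat=2)]
        mean = sum(v * p for v, p in pairs)
        return sum(v * v * p for v, p in pairs) - mean ** 2
    return var_of(lambda x, y: (x + y) / 2), var_of(lambda x, y: min(x, y))

# Case 1: fair coin on {0, 1} -> averaging has the LOWER variance
print(variances({0: 0.5, 1: 0.5}))    # (0.125, 0.1875)
# Case 2: rare large "mistake" -> averaging has the HIGHER variance
print(variances({0: 0.9, 100: 0.1}))  # roughly (450, 99)
```

The second case captures the mistake-driven scenario: a rare, very slow lap inflates the variance of the average, while the minimum usually discards it.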

Source: Link, Question Author: innisfree, Answer Author: sandris
