This is a follow-on from my previous question about samples from a distribution.

Suppose $X_1, \ldots, X_{n-1}, X_n$ are independent random variables, all following some fixed distribution $D$. How do I prove that $\Pr(X_n > X_1 \wedge \ldots \wedge X_n > X_{n-1}) = \frac{1}{n}$? (This is true intuitively and experimentally.)

**Answer**

Here’s a slightly more notational proof, since sometimes people feel squeamish about intuitive proofs like glen_b’s comment. (Sometimes for good reason, since it’s not immediately obvious that his argument doesn’t apply to discrete distributions.)

Suppose that $X_i$ are distributed iid according to some distribution $D$.

Let $M_i$ be the event that $X_i = \max(X_1, \dots, X_n)$.

Clearly, at least one of the $M_i$ must hold, so

$\Pr(M_1 \cup \dots \cup M_n) = 1$.

But, by the inclusion–exclusion principle,

\begin{align*}
\Pr(M_1 \cup \dots \cup M_n)
&= \sum_i \Pr(M_i) - \sum_{i < j} \Pr(M_i \cap M_j) \\
&\quad + \sum_{i < j < k} \Pr(M_i \cap M_j \cap M_k) - \dots
\end{align*}

If $D$ is continuous, then $\Pr(M_i \cap M_j) = 0$ for all $i \ne j$, since a tie for the maximum occurs with probability zero; all the later terms drop out. Also, since the $X_i$ are identically distributed, by symmetry $\Pr(M_i) = \Pr(M_1)$ for all $i$. Thus

$$ \Pr(M_1 \cup \dots \cup M_n) = \sum_i \Pr(M_i) = n \Pr(M_1) = 1,$$

so $\Pr(M_1) = \frac{1}{n}$.
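A quick Monte Carlo check of the continuous case is easy to run. The sketch below is hypothetical: the choice of Uniform(0, 1) for $D$, and the values of `n` and `trials`, are arbitrary.

```python
import random

def prob_last_is_max(n, trials, rng):
    """Estimate P(X_n is the strict maximum of n iid continuous draws)."""
    hits = 0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]  # iid Uniform(0, 1) samples
        if xs[-1] > max(xs[:-1]):              # the event M_n (strict max)
            hits += 1
    return hits / trials

rng = random.Random(0)
print(prob_last_is_max(5, 100_000, rng))  # should be close to 1/5
```

With 100,000 trials the standard error is about 0.0013, so the estimate should land well within 0.01 of $1/5$.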

If $D$ is not continuous, the higher-order terms need not vanish. Taking @Yair Daon’s example where each $X_i$ is identically 1, every $M_i$ always holds, and the sum becomes

$$
\Pr(M_1 \cup \dots \cup M_n) = \sum_i 1 - \sum_{i<j} 1 + \sum_{i<j<k} 1 - \dots
= \sum_{k=1}^n (-1)^{k+1} \binom{n}{k} = 1,
$$

where the last equality follows from the binomial theorem applied to $(1-1)^n = 0$.
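The alternating binomial sum can also be confirmed numerically for small $n$; a minimal sketch using Python’s `math.comb`:

```python
from math import comb

# Check that sum_{k=1}^{n} (-1)^(k+1) * C(n, k) equals 1 for n = 1..10.
for n in range(1, 11):
    s = sum((-1) ** (k + 1) * comb(n, k) for k in range(1, n + 1))
    assert s == 1
print("identity holds for n = 1..10")
```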

**Attribution**
*Source: Link, Question Author: REFlint, Answer Author: Danica*