# Rate at which a Gaussian random variable is the maximum in a set of independent Gaussian random variables [duplicate]

Assume a random vector $X = [x_1, x_2, ..., x_n]$ where the random variables are independent and Gaussian distributed. At the same time, they are not identically distributed; they can have arbitrary means and variances.

What is the probability that $x_k > x_i \forall i \neq k$? In other words, what is the probability that a realization of the random vector will yield $x_k$ as the maximum value in the vector? I'm looking for a closed form solution if one exists.

Here's as far as I got addressing this problem. Assume $k=1$ without loss of generality.

$P\left(x_1 > x_i \forall_{i > 1}\right) = \int_{-\infty}^\infty p(x_1)P\left(x_1 > x_i \forall_{i>1}|x_1\right)dx_1$

$P\left(x_1 > x_i \forall_{i > 1}\right) = \int_{-\infty}^\infty p(x_1)\prod_{i=2}^n P\left(x_1 > x_i|x_1\right)dx_1$

$P\left(x_1 > x_i \forall_{i > 1}\right) = \int_{-\infty}^\infty p(x_1)\prod_{i=2}^nF_i(x_1)dx_1$

where $F_i(x)$ is the cumulative distribution function for $x_i$.

I'm honestly not sure where to go from here. Any suggestions on whether there is a way forward or whether there is a better approach? Numerical integration is an option to move forward but I'd prefer a closed form solution if possible as it would open up other options for investigation in a larger problem I'm attacking.
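For what it's worth, the integral above is straightforward to evaluate numerically. Here is a sketch using scipy quadrature, with hypothetical example means and variances:

```python
import numpy as np
from scipy import stats, integrate

# Hypothetical example means and standard deviations, one per variable.
mu = np.array([0.5, 0.0, -0.3])
sd = np.array([1.0, 2.0, 0.7])

def p_k_is_max(k, mu, sd):
    """P(x_k > x_i for all i != k), via the integral of p_k(x) * prod_{i != k} F_i(x)."""
    others = [i for i in range(len(mu)) if i != k]

    def integrand(x):
        val = stats.norm.pdf(x, mu[k], sd[k])
        for i in others:
            val = val * stats.norm.cdf(x, mu[i], sd[i])
        return val

    val, _err = integrate.quad(integrand, -np.inf, np.inf)
    return val

probs = [p_k_is_max(k, mu, sd) for k in range(len(mu))]
print(probs)  # one probability per k
```

Since the events partition the sample space (ties occur with probability zero), the results should sum to $1$, which is a useful check on the quadrature.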

Thanks!

UPDATE: The previous question-answer pair marked as answering this question does not, for two reasons: (1) that question seeks the probability that $x_k$ is the minimum, not the maximum, and (2) no closed form solution is offered there. Had a closed form solution been derived, a similar approach in that answer might have been leveraged here, but none is present.

UPDATE-2: There is now a closed form solution proposed to the related problem for the minimum that provides the basis for solving the question posed here.

$\newcommand{\N}{\mathcal N}\newcommand{\tr}{\mathrm{tr}}$Restating the question: let $X \sim \N(\mu, \Sigma)$, where $\Sigma = \mathrm{diag}(\sigma_1^2, \dots, \sigma_n^2)$. What is $\Pr(\forall i \ne k,\ X_k \ge X_i)$?

Now, this happens if and only if the $(n-1)$-dimensional random vector $Y$, obtained by dropping the $k$th component and subtracting each remaining component from $X_k$ (so $Y_i = X_k - X_i$), is componentwise positive. Define the $(n-1) \times n$ matrix $A$ by taking the negative of the $(n-1) \times (n-1)$ identity and inserting a column of all $1$s as the $k$th column, so that $Y = A X$; if $k = n$, this is $A = \begin{bmatrix}-I & 1\end{bmatrix}$.

Then $Y \sim \N(A \mu, A \Sigma A^T)$. $A \mu$ is easy, and $A \Sigma A^T$ ends up dropping the $k$th row and column from $\Sigma$ and then adding $\sigma_k^2$ to each entry: $\Sigma' + \sigma_k^2 1 1^T$, where $\Sigma'$ drops $k$ from $\Sigma$.
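As a sanity check on the algebra, one can build $A$ for a general $k$ and confirm the covariance structure numerically. A sketch with hypothetical example values:

```python
import numpy as np

n, k = 4, 1                               # example size and (0-based) index k
sigma2 = np.array([1.0, 4.0, 0.25, 2.0])  # example variances

# Build A: negate the identity, set column k to all ones, drop row k,
# so that row i of A X computes X_k - X_i.
A = -np.eye(n)
A[:, k] = 1.0
A = np.delete(A, k, axis=0)               # shape (n-1, n)

Sigma = np.diag(sigma2)
Sigma_prime = np.diag(np.delete(sigma2, k))
ones = np.ones((n - 1, 1))

# A Sigma A^T should equal Sigma' + sigma_k^2 * 1 1^T.
print(np.allclose(A @ Sigma @ A.T, Sigma_prime + sigma2[k] * ones @ ones.T))
```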

Now, the probability in question is known, naturally enough, as a "Gaussian orthant probability". In general, these are difficult to obtain in closed form; a few special cases are known, and there are many algorithms out there to approximate them numerically.
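In practice, scipy's multivariate normal CDF (which wraps Genz's quadrature code) can approximate such orthant probabilities directly. A sketch with hypothetical example values, cross-checked against the one-dimensional integral from the question:

```python
import numpy as np
from scipy import integrate
from scipy.stats import multivariate_normal, norm

mu = np.array([0.5, 0.0, -0.3])      # example means
sigma2 = np.array([1.0, 4.0, 0.49])  # example variances
n, k = 3, 0

A = -np.eye(n)
A[:, k] = 1.0
A = np.delete(A, k, axis=0)

# P(Y >= 0 componentwise) = P(-Y <= 0), and -Y ~ N(-A mu, A Sigma A^T).
orthant = multivariate_normal.cdf(np.zeros(n - 1), mean=-A @ mu,
                                  cov=A @ np.diag(sigma2) @ A.T)

# Cross-check against the one-dimensional integral from the question.
sd = np.sqrt(sigma2)
def integrand(x):
    val = norm.pdf(x, mu[k], sd[k])
    for i in range(n):
        if i != k:
            val = val * norm.cdf(x, mu[i], sd[i])
    return val
direct, _err = integrate.quad(integrand, -np.inf, np.inf)
print(orthant, direct)  # the two estimates should agree closely
```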

But we have a special form of covariance matrix here (a particularly simple rank-1 plus diagonal), which may yield a reasonable solution. Below is an effort towards that, but spoiler warning: I don't get to a closed form.

The probability in question is

$$\Pr(Y \ge 0) = \frac{1}{(2\pi)^{\frac{n-1}{2}} \left\lvert A \Sigma A^T \right\rvert^{\frac12}} \int_{\mathcal Y} \exp\left( -\tfrac{1}{2} (y - A\mu)^T \left( A \Sigma A^T \right)^{-1} (y - A\mu) \right) \mathrm{d}y,$$

where $\mathcal Y = \{ y : \forall i,\ y_i \ge 0 \}$.
To avoid those pesky $A \mu$ terms, define $Z = Y - A \mu$ and $\mathcal Z = \{ z : \forall i,\ z_i > (- A \mu)_i \}$. Then we care about

$$\Pr(Z \in \mathcal Z) = \frac{1}{(2\pi)^{\frac{n-1}{2}} \left\lvert \Sigma' + \sigma_k^2 1 1^T \right\rvert^{\frac12}} \int_{\mathcal Z} \exp\left( -\tfrac{1}{2} z^T \left( \Sigma' + \sigma_k^2 1 1^T \right)^{-1} z \right) \mathrm{d}z.$$
Applying the matrix determinant lemma:

$$\left\lvert \Sigma' + \sigma_k^2 1 1^T \right\rvert = \left( 1 + \sigma_k^2\, 1^T \Sigma'^{-1} 1 \right) \left\lvert \Sigma' \right\rvert = \left( 1 + \sum_{i \ne k} \frac{\sigma_k^2}{\sigma_i^2} \right) \prod_{i \ne k} \sigma_i^2,$$
so at least the normalization constant is easy enough.
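A quick numeric check of the determinant lemma step, with hypothetical example values:

```python
import numpy as np

d = np.array([1.0, 4.0, 0.25])  # example diagonal of Sigma' (sigma_i^2, i != k)
sk2 = 2.0                       # example sigma_k^2
m = len(d)

# |Sigma' + sk2 * 1 1^T| versus (1 + sk2 * sum(1/sigma_i^2)) * prod(sigma_i^2).
lhs = np.linalg.det(np.diag(d) + sk2 * np.ones((m, m)))
rhs = (1.0 + sk2 * np.sum(1.0 / d)) * np.prod(d)
print(lhs, rhs)  # the two agree
```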

To tackle the exponent, apply Sherman–Morrison:

$$\left( \Sigma' + \sigma_k^2 1 1^T \right)^{-1} = \Sigma'^{-1} - \frac{\sigma_k^2\, \Sigma'^{-1} 1 1^T \Sigma'^{-1}}{1 + \sigma_k^2\, 1^T \Sigma'^{-1} 1},$$
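A quick numeric check of the Sherman–Morrison step, with hypothetical example values:

```python
import numpy as np

d = np.array([1.0, 4.0, 0.25])  # example diagonal of Sigma'
sk2 = 2.0                       # example sigma_k^2
m = len(d)

Sp_inv = np.diag(1.0 / d)
ones = np.ones(m)

# Direct inverse versus the rank-one update formula.
lhs = np.linalg.inv(np.diag(d) + sk2 * np.outer(ones, ones))
rhs = Sp_inv - sk2 * np.outer(Sp_inv @ ones, Sp_inv @ ones) / (1.0 + sk2 * ones @ Sp_inv @ ones)
print(np.allclose(lhs, rhs))  # True
```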
and then the integral (after pulling out constants) is

$$\int_{\mathcal Z} \exp\left( -\tfrac{1}{2} z^T \Sigma'^{-1} z + \frac{\sigma_k^2 \left( 1^T \Sigma'^{-1} z \right)^2}{2 \left( 1 + \sigma_k^2\, 1^T \Sigma'^{-1} 1 \right)} \right) \mathrm{d}z.$$
This integral seems amenable to something smarter than just blind numerical integration, but it's late now....