Probability that a Gaussian random variable is the maximum in a set of independent Gaussian random variables [duplicate]

Assume a random vector X = [x_1, x_2, ..., x_n] where the random variables are independent and Gaussian distributed. At the same time, they are not identically distributed; they can have arbitrary means and variances.

What is the probability that x_k > x_i \forall i \neq k? In other words, what is the probability that a realization of the random vector will yield x_k as the maximum value in the vector? I'm looking for a closed form solution if one exists.

Here's as far as I got with this problem. Assume k = 1 without loss of generality.

P\left(x_1 > x_i \forall_{i > 1}\right) = \int_{-\infty}^\infty p(x_1)P\left(x_1 > x_i \forall_{i>1}|x_1\right)dx_1

P\left(x_1 > x_i \forall_{i > 1}\right) = \int_{-\infty}^\infty p(x_1)\prod_{i=2}^n P\left(x_1 > x_i|x_1\right)dx_1

P\left(x_1 > x_i \forall_{i > 1}\right) = \int_{-\infty}^\infty p(x_1)\prod_{i=2}^nF_i(x_1)dx_1

where F_i(x) is the cumulative distribution function for x_i.

I'm honestly not sure where to go from here. Any suggestions on whether there is a way forward or whether there is a better approach? Numerical integration is an option to move forward but I'd prefer a closed form solution if possible as it would open up other options for investigation in a larger problem I'm attacking.
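For concreteness, the numerical fallback is only a one-dimensional quadrature over the expression above, so it is cheap to evaluate. Here is a minimal sketch using scipy; the means and standard deviations are made up purely for illustration and are not part of the problem:

```python
import numpy as np
from scipy import integrate, stats

mu = np.array([0.0, 1.0, -0.5])     # made-up example means
sigma = np.array([1.0, 2.0, 0.5])   # made-up example standard deviations
k = 0                               # ask: is x_k the largest component?
others = [i for i in range(len(mu)) if i != k]

def integrand(x):
    # p_k(x) * prod_{i != k} F_i(x), exactly the last displayed equation
    val = stats.norm.pdf(x, mu[k], sigma[k])
    for i in others:
        val *= stats.norm.cdf(x, mu[i], sigma[i])
    return val

prob, abserr = integrate.quad(integrand, -np.inf, np.inf)
print(prob, abserr)
```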

Thanks!

UPDATE: The question-answer pair marked as a duplicate of this question does not actually answer it, for two reasons. (1) That question asks for the probability that x_k is the minimum, not the maximum, and (2) no closed form solution is offered there. If a closed form solution had been derived, a similar approach might have been leveraged here, but that is simply not present.

UPDATE-2: There is now a closed form solution proposed to the related problem for the minimum that provides the basis for solving the question posed here.

Answer

\newcommand{\N}{\mathcal N}\newcommand{\tr}{\mathrm{tr}}Restating the question: let X \sim \N(\mu, \Sigma), where \Sigma = \mathrm{diag}(\sigma_1^2, \dots, \sigma_n^2). What is \Pr(\forall i \ne k, X_k \ge X_i)?

Now, this happens if and only if the (n-1)-dimensional random vector Y, with components Y_i = X_k - X_i for i \ne k, is componentwise positive. Define the (n-1) \times n matrix A by taking the negative of the (n-1)-dimensional identity and inserting a column of all 1s as the kth column, so that Y = A X; if k = n, this is A = \begin{bmatrix}-I & 1\end{bmatrix}.

Then Y \sim \N(A \mu, A \Sigma A^T). A \mu is easy, and A \Sigma A^T ends up dropping the kth row and column from \Sigma and then adding \sigma_k^2 to every entry: A \Sigma A^T = \Sigma' + \sigma_k^2 1 1^T, where \Sigma' is \Sigma with the kth row and column removed.
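A quick way to sanity-check this reduction is to build A explicitly and compare A \Sigma A^T against \Sigma' + \sigma_k^2 1 1^T. A sketch, reusing the made-up example numbers from the question's snippet (they are assumptions for illustration only):

```python
import numpy as np

mu = np.array([0.0, 1.0, -0.5])       # made-up example means
sigma2 = np.array([1.0, 4.0, 0.25])   # made-up example variances (sd 1, 2, 0.5)
k = 0
n = len(mu)
others = [i for i in range(n) if i != k]

# A is (n-1) x n: -1 on the non-k columns (one per row), a column of ones in column k
A = np.zeros((n - 1, n))
A[np.arange(n - 1), others] = -1.0
A[:, k] = 1.0

Sigma = np.diag(sigma2)
mean_Y = A @ mu
cov_Y = A @ Sigma @ A.T

# Same covariance via the rank-1-plus-diagonal form Sigma' + sigma_k^2 * 1 1^T
cov_Y_alt = np.diag(sigma2[others]) + sigma2[k] * np.ones((n - 1, n - 1))
print(np.allclose(cov_Y, cov_Y_alt))   # True
```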

Now, the probability in question is known, naturally enough, as a "Gaussian orthant probability". In general these are difficult to get in closed form, though there are a number of algorithms out there for approximating them numerically.
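One such numerical approximation is available through scipy's multivariate_normal.cdf, using \Pr(Y > 0) = \Pr(-Y \le 0). A sketch, again with the made-up example numbers:

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 1.0, -0.5])       # made-up example means
sigma2 = np.array([1.0, 4.0, 0.25])   # made-up example variances
k = 0
others = [i for i in range(len(mu)) if i != k]

mean_Y = mu[k] - mu[others]
cov_Y = np.diag(sigma2[others]) + sigma2[k] * np.ones((len(others), len(others)))

# Pr(Y > 0) = Pr(-Y <= 0), and -Y ~ N(-mean_Y, cov_Y)
p = multivariate_normal(mean=-mean_Y, cov=cov_Y).cdf(np.zeros(len(others)))
print(p)   # should agree with the one-dimensional quadrature in the question
```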

But we have a special form of covariance matrix here (a particularly simple rank-1 plus diagonal), which may yield a reasonable solution. Below is an effort towards that, but spoiler warning: I don't get to a closed form.


The probability in question is:
\begin{align}
\Pr\left( Y > 0 \right)
= \int_{y \in (0, \infty)^{n-1}} \left( 2 \pi \right)^{-\frac{n-1}{2}} \lvert A \Sigma A^T \rvert^{-\frac12} \exp\left( -\frac12 (y - A \mu)^T (A \Sigma A^T)^{-1} (y - A \mu) \right) \mathrm{d}y
.\end{align}

To avoid those pesky A \mu terms, define Z = Y - A \mu and \mathcal Z = \{ z : \forall i,\ z_i > (- A \mu)_i \}. Then we care about
\begin{align}
\Pr\left( Z > - A \mu \right)
= \int_{z \in \mathcal{Z}} \left( 2 \pi \right)^{-\frac{n-1}{2}} \lvert A \Sigma A^T \rvert^{-\frac12} \exp\left( -\frac12 z^T (A \Sigma A^T)^{-1} z \right) \mathrm{d}z
.\end{align}

Applying the matrix determinant lemma:
\begin{align}
\lvert A \Sigma A^T \rvert
&= \left\lvert \Sigma' + \sigma_k^2 1 1^T \right\rvert
\\&= \left( 1 + \sigma_k^2 1^T \Sigma'^{-1} 1 \right) \lvert \Sigma' \rvert
\\&= \left( 1 + \sum_{i \ne k} \frac{\sigma_k^2}{\sigma_i^2} \right) \prod_{i \ne k} \sigma_i^2
,\end{align}

so at least the normalization constant is easy enough.
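A quick numerical check of that determinant identity, on made-up example variances (the same assumed numbers as above):

```python
import numpy as np

sigma2 = np.array([1.0, 4.0, 0.25])   # made-up example variances
k = 0
others = [i for i in range(len(sigma2)) if i != k]
m = len(others)

# |A Sigma A^T| via the rank-1-plus-diagonal form, vs. the closed form above
cov_Y = np.diag(sigma2[others]) + sigma2[k] * np.ones((m, m))
lhs = np.linalg.det(cov_Y)
rhs = (1.0 + np.sum(sigma2[k] / sigma2[others])) * np.prod(sigma2[others])
print(np.isclose(lhs, rhs))   # True
```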

To tackle the exponent, apply Sherman-Morrison:
\begin{align}
\left( A \Sigma A^T \right)^{-1}
&= \left( \Sigma' + \sigma_k^2 1 1^T \right)^{-1}
\\&= \Sigma'^{-1} - \frac{\sigma_k^2 \Sigma'^{-1} 1 1^T \Sigma'^{-1}}{1 + \sigma_k^2 1^T \Sigma'^{-1} 1}
\\&= \Sigma'^{-1} - \frac{1}{\frac{1}{\sigma_k^2} + \sum_{i \ne k} \frac{1}{\sigma_i^2}} \left[ \frac{1}{\sigma_i^2 \sigma_j^2} \right]_{ij}
\\&= \Sigma'^{-1} - \frac{1}{\tr(\Sigma^{-1})} \left[ \frac{1}{\sigma_i^2 \sigma_j^2} \right]_{ij}
\\
z^T (A \Sigma A^T)^{-1} z
&= \sum_{i \ne k} \frac{z_i^2}{\sigma_i^2}
- \frac{1}{\tr(\Sigma^{-1})}
\sum_{i, j \ne k} \frac{z_i z_j}{\sigma_i^2 \sigma_j^2}
\end{align}

and then the integral (after pulling out constants) is
\begin{align}
\int_{z \in \mathcal{Z}} &\exp\left( - \tfrac12 z^T (A \Sigma A^T)^{-1} z \right) \mathrm{d}z
\\&= \int_{z \in \mathcal{Z}} \prod_{i \ne k} \exp\left( - \frac{z_i^2}{2 \sigma_i^2} \right) \prod_{i, j \ne k} \exp\left( \frac{1}{2 \tr(\Sigma^{-1})} \frac{z_i z_j}{\sigma_i^2 \sigma_j^2} \right) \mathrm{d}z
.\end{align}
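Before going further, a quick numerical check that the Sherman-Morrison inverse and the exponent decomposition above hold, again on made-up example variances:

```python
import numpy as np

sigma2 = np.array([1.0, 4.0, 0.25])   # made-up example variances
k = 0
others = [i for i in range(len(sigma2)) if i != k]
m = len(others)

cov_Y = np.diag(sigma2[others]) + sigma2[k] * np.ones((m, m))

# Sherman-Morrison form of (A Sigma A^T)^{-1}
inv_sm = (np.diag(1.0 / sigma2[others])
          - np.outer(1.0 / sigma2[others], 1.0 / sigma2[others]) / np.sum(1.0 / sigma2))
print(np.allclose(np.linalg.inv(cov_Y), inv_sm))   # True

# Exponent decomposition for an arbitrary z in R^{n-1}
z = np.array([0.3, -1.2])
quad_direct = z @ np.linalg.inv(cov_Y) @ z
quad_split = np.sum(z**2 / sigma2[others]) - np.sum(z / sigma2[others])**2 / np.sum(1.0 / sigma2)
print(np.isclose(quad_direct, quad_split))   # True
```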

This integral seems amenable to something smarter than just blind numerical integration, but it's late now....
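As a closing brute-force sanity check on all of the above, a Monte Carlo estimate of the original probability (same made-up example numbers) should agree with the quadrature and orthant-probability values from the earlier sketches:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.0, 1.0, -0.5])     # made-up example means
sigma = np.array([1.0, 2.0, 0.5])   # made-up example standard deviations
k = 0

# Fraction of draws in which component k is the maximum
x = rng.normal(mu, sigma, size=(1_000_000, len(mu)))
print(np.mean(x.argmax(axis=1) == k))
```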

Attribution
Source: Link, Question Author: Chris, Answer Author: Community
