# What is the relation behind Jeffreys Priors and a variance stabilizing transformation?

I was reading about the Jeffreys prior on Wikipedia (Jeffreys prior) and saw that after each example, it describes how a variance-stabilizing transformation turns the Jeffreys prior into a uniform prior.

As an example, for the Bernoulli case, it states that for a coin that is heads with probability $\gamma \in [0,1]$, the Bernoulli trial model yields that the Jeffreys prior for the parameter $\gamma$ is:

$$p(\gamma) \propto \frac{1}{\sqrt{\gamma(1-\gamma)}}$$

It then states that this is a beta distribution with $\alpha = \beta = \frac{1}{2}$. It also states that if $\gamma = \sin^2(\theta)$, then the Jeffreys prior for $\theta$ is uniform in the interval $\left[0, \frac{\pi}{2}\right]$.
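To see where the uniform prior comes from, here is the change-of-variables computation (not spelled out in the Wikipedia article, but it follows directly from the transformation rule for densities). With $\gamma = \sin^2(\theta)$:

$$p(\theta) = p(\gamma)\left|\frac{d\gamma}{d\theta}\right| \propto \frac{2\sin\theta\cos\theta}{\sqrt{\sin^2\theta\,(1-\sin^2\theta)}} = \frac{2\sin\theta\cos\theta}{\sin\theta\cos\theta} = 2,$$

which is constant on $\left[0, \frac{\pi}{2}\right]$, i.e. uniform.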

I recognize the transformation as that of a variance-stabilizing transformation. What confuses me is:

1. Why would a variance-stabilizing transformation result in a uniform prior?

2. Why would we even want a uniform prior? (since it seems it may be more susceptible to being improper)

In general, I’m not quite sure why the squared-sine transformation is given and what role it plays. Would anyone have any ideas?

The Jeffreys prior is invariant under reparametrization. For that reason, many Bayesians consider it to be a “non-informative prior”. (Hartigan showed that there is a whole space of such priors $J^\alpha H^\beta$ for $\alpha + \beta=1$ where $J$ is Jeffreys’ prior and $H$ is Hartigan’s asymptotically locally invariant prior. — Invariant Prior Distributions)
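One way to convince yourself of the Bernoulli example numerically: if $\gamma \sim \text{Beta}(\tfrac{1}{2}, \tfrac{1}{2})$ (the Jeffreys prior), then $\theta = \arcsin(\sqrt{\gamma})$ should be uniform on $\left[0, \frac{\pi}{2}\right]$. A minimal sketch (my own check, not from the article; the sample size and seed are arbitrary):

```python
import numpy as np

# Draw from the Jeffreys prior for the Bernoulli parameter, Beta(1/2, 1/2)
rng = np.random.default_rng(0)
gamma = rng.beta(0.5, 0.5, size=200_000)

# Apply the inverse of gamma = sin^2(theta)
theta = np.arcsin(np.sqrt(gamma))

# Compare empirical moments with those of Uniform(0, pi/2):
# mean = pi/4, variance = (pi/2)^2 / 12
print(np.mean(theta))  # should be close to pi/4 ~ 0.785
print(np.var(theta))   # should be close to (pi/2)^2 / 12 ~ 0.206
```

The empirical mean and variance match the uniform distribution's, consistent with the claim that the squared-sine transformation flattens the Jeffreys prior.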