If $F_Z$ is a CDF, it looks like $F_Z(z)^\alpha$ ($\alpha > 0$) is a CDF as well.
Q: Is this a standard result?
Q: Is there a good way to find a function $g$ with $X \equiv g(Z)$ s.t. $F_X(x) = F_Z(z)^\alpha$, where $x \equiv g(z)$?
Basically, I have another CDF in hand, $F_Z(z)^\alpha$. In some reduced-form sense I'd like to characterize the random variable that produces that CDF.
EDIT: I'd be happy if I could get an analytical result for the special case $Z \sim N(0,1)$. Or at least know that such a result is intractable.
Answer
I like the other answers, but nobody has mentioned the following yet. The event $\{U \le t,\ V \le t\}$ occurs if and only if $\{\max(U,V) \le t\}$, so if $U$ and $V$ are independent and $W = \max(U,V)$, then $F_W(t) = F_U(t)\,F_V(t)$. So for $\alpha$ a positive integer (say, $\alpha = n$), take $X = \max(Z_1, \dots, Z_n)$, where the $Z$'s are i.i.d.
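As a quick sanity check of the integer case, here is a simulation sketch (the sample size, the choice $m = 3$ playing the role of $\alpha = n$, and the standard normal $Z$'s are arbitrary, just for illustration):

nsim <- 10000; m <- 3                      # m plays the role of the integer alpha
zmat <- matrix(rnorm(nsim * m), ncol = m)  # nsim rows of m i.i.d. N(0,1) draws
x <- apply(zmat, 1, max)                   # X = max(Z_1, ..., Z_m)
plot(ecdf(x))                              # empirical CDF of X
curve(pnorm(x)^m, add = TRUE, lty = 2)     # Phi(t)^m, the claimed CDF

The empirical CDF of the maxima should sit right on top of $\Phi^m$.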
For $\alpha = 1/n$ we can switcheroo to get $F_Z = F_X^n$, so $X$ would be the random variable such that the max of $n$ independent copies has the same distribution as $Z$ (and this would not be one of our familiar friends, in general).
The case of $\alpha$ a positive rational number (say, $\alpha = m/n$) follows from the previous since
$$(F_Z)^{m/n} = \left(F_Z^{1/n}\right)^m.$$
For $\alpha$ irrational, choose a sequence of positive rationals $a_k$ converging to $\alpha$; then the sequence $X_k$ (where we can use our above tricks for each $k$) will converge in distribution to the $X$ desired.
This might not be the characterization you are looking for, but it at least gives some idea of how to think about $F_Z^\alpha$ for $\alpha$ suitably nice. On the other hand, I'm not really sure how much nicer it can really get: you already have the CDF, so the chain rule gives you the PDF, and you can calculate moments till the sun sets…? It's true that most $Z$'s won't have an $X$ that's familiar for $\alpha = \sqrt{2}$, but if I wanted to play around with an example to look for something interesting I might try $Z$ uniformly distributed on the unit interval with $F(z) = z$, $0 < z < 1$.
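(For that uniform choice, by the way, the answer falls right out: $F_Z(z)^\alpha = z^\alpha$ for $0 < z < 1$, which is exactly the $\mathrm{Beta}(\alpha, 1)$ CDF, so $X \sim \mathrm{Beta}(\alpha, 1)$ works for every $\alpha > 0$, not just the nice ones.)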
EDIT: I wrote some comments on @JMS's answer, and there was a question about my arithmetic, so I'll write out what I meant in the hope that it's clearer.
In a comment on @JMS's answer, @cardinal correctly wrote that the problem simplifies to
$$g^{-1}(y) = \Phi^{-1}\!\left(\Phi^\alpha(y)\right),$$
or more generally, when $Z$ is not necessarily $N(0,1)$, we have
$$x = g^{-1}(y) = F^{-1}\!\left(F^\alpha(y)\right).$$
My point was that when $F$ has a nice inverse function we can just solve for the function $y = g(x)$ with basic algebra. I wrote in the comment that $g$ should be
$$y = g(x) = F^{-1}\!\left(F^{1/\alpha}(x)\right).$$
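To see why this works, just chase the definitions, assuming $F$ is continuous and strictly increasing (so that $F^{-1}$ is well defined and $g$ is increasing):
$$F_Y(y) = P\big(g(X) \le y\big) = P\big(X \le g^{-1}(y)\big) = F\big(g^{-1}(y)\big) = F\!\left(F^{-1}\!\left(F^\alpha(y)\right)\right) = F^\alpha(y).$$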
Let's take a special case, plug things in, and see how it works. Let $X$ have an $\mathrm{Exp}(1)$ distribution, with CDF
$$F(x) = 1 - e^{-x}, \quad x > 0,$$
and inverse CDF
$$F^{-1}(y) = -\ln(1 - y).$$
It is easy to plug everything in to find $g$; after we're done we get
$$y = g(x) = -\ln\!\left(1 - \left(1 - e^{-x}\right)^{1/\alpha}\right).$$
So, in summary, my claim is that if $X \sim \mathrm{Exp}(1)$ and if we define
$$Y = -\ln\!\left(1 - \left(1 - e^{-X}\right)^{1/\alpha}\right),$$
then $Y$ will have a CDF which looks like
$$F_Y(y) = \left(1 - e^{-y}\right)^\alpha.$$
We can prove this directly: look at $P(Y \le y)$ and use algebra to get the expression; in the next-to-last step we need the Probability Integral Transform (the calculation is written out after the two facts below). Just in the (often repeated) case that I'm crazy, I ran some simulations to double-check that it works, ... and it does. See below. To make the code easier I used two facts:
If $X \sim F$, then $U = F(X) \sim \mathrm{Unif}(0,1)$.
If $U \sim \mathrm{Unif}(0,1)$, then $U^{1/\alpha} \sim \mathrm{Beta}(\alpha, 1)$.
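Written out, the direct calculation is short. With $U = 1 - e^{-X} \sim \mathrm{Unif}(0,1)$ by the first fact,
$$P(Y \le y) = P\!\left(-\ln\!\left(1 - \left(1 - e^{-X}\right)^{1/\alpha}\right) \le y\right) = P\!\left(\left(1 - e^{-X}\right)^{1/\alpha} \le 1 - e^{-y}\right) = P\!\left(U \le \left(1 - e^{-y}\right)^{\alpha}\right) = \left(1 - e^{-y}\right)^{\alpha}, \quad y > 0.$$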
The plot of the simulation results follows.
The R code used to generate the plot (minus labels) is
n <- 10000; alpha <- 0.7                    # sample size and exponent
z <- rbeta(n, shape1 = alpha, shape2 = 1)   # z = (1 - e^{-X})^(1/alpha) ~ Beta(alpha, 1), by the two facts
y <- -log(1 - z)                            # y = g(X), the transformed variable
plot(ecdf(y))                               # empirical CDF of the simulated Y
f <- function(x) (pexp(x, rate = 1))^alpha  # claimed CDF: (1 - e^{-y})^alpha
curve(f, add = TRUE, lty = 2, lwd = 2)      # overlay it on the ECDF
The fit looks pretty good, I think? Maybe I'm not crazy (this time)?
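(For an equivalent check that skips the Beta shortcut, one could push exponential draws through $g$ directly; the variable names, sample size, and $\alpha$ below are arbitrary.)

xx <- rexp(10000, rate = 1)                      # X ~ Exp(1)
a <- 0.7                                         # any alpha > 0
yy <- -log(1 - (1 - exp(-xx))^(1/a))             # Y = g(X), the claimed transformation
plot(ecdf(yy))                                   # empirical CDF of Y
curve(pexp(x, rate = 1)^a, add = TRUE, lty = 2)  # target CDF (1 - e^{-y})^a

The two curves should again be nearly indistinguishable.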