Probability of an independent Poisson process overtaking another

I have asked this question in another form on other Stack Exchange sites, so apologies for the partial repost.

I have asked my professor and a couple of PhD students about it, without getting a definitive answer. I will first state the problem, then my potential solution and the problem with my solution, so apologies for the wall of text.

The problem:

Assume two independent Poisson processes M and R, with rates λM and λR over the same interval, where λR>λM. What is the probability that, at some point in time as time tends to infinity, the aggregate output of process M is larger than the aggregate output of process R plus D, i.e. P(M>R+D)? To illustrate with an example, assume two bridges R and M; on average λR and λM cars drive over bridges R and M respectively per interval, and λR>λM. D cars have already driven over bridge R; what is the probability that, at some point in time, more cars in total have driven over bridge M than over R?
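The quantity being asked for can be estimated by simulation. A minimal Monte Carlo sketch in Python (not the questioner's R code; the finite horizon of 1,500 merged events is an assumption, justified because with λR>λM the difference M−R drifts downward, so late crossings are vanishingly rare): at each event of the merged process, the event belongs to M with probability λM/(λM+λR), and we ask whether the running difference M−R ever exceeds D.

```python
import numpy as np

# Monte Carlo sketch: estimate P(M ever exceeds R + D).
# At each event of the merged process, it belongs to M with
# probability lam_M / (lam_M + lam_R); the running difference
# M - R is then a random walk on the integers.
rng = np.random.default_rng(42)
lam_M, lam_R, D = 0.4, 0.6, 6
p_M = lam_M / (lam_M + lam_R)

n_trials, n_events = 10_000, 1_500   # finite horizon (assumption)
steps = rng.choice(np.array([1, -1], dtype=np.int8),
                   size=(n_trials, n_events), p=[p_M, 1 - p_M])
walk = np.cumsum(steps, axis=1, dtype=np.int16)   # M - R after each event
estimate = np.mean(walk.max(axis=1) > D)          # fraction of runs where M led by > D
print(estimate)
```

The printed value is a simulation estimate of the probability in question for D=6, λR=0.6, λM=0.4.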

My way of solving this problem:

First we define two Poisson processes:

$$M(I)\sim\text{Poisson}(\lambda_M I)\qquad R(I)\sim\text{Poisson}(\lambda_R I)$$

The next step is to find a function that describes P(M>R+D) after a given number of intervals I. This happens when M(I)>k+D jointly with R(I)=k, summed over all non-negative values of k. To illustrate, if the aggregate output of R is X, then the aggregate output of M needs to be larger than X+D, as shown below.

$$P\big(M(I)>R(I)+D\big)=\sum_{k=0}^{\infty}P\big(M(I)>k+D,\;R(I)=k\big)$$

Due to independence, each summand can be rewritten as the product of two factors, where the first factor is one minus the Poisson CDF and the second factor is the Poisson pmf:

$$P\big(M(I)>R(I)+D\big)=\sum_{k=0}^{\infty}\underbrace{P\big(M(I)>k+D\big)}_{1-\text{Poisson CDF}}\cdot\underbrace{P\big(R(I)=k\big)}_{\text{Poisson pmf}}$$

To create an example, assume D=6, λR=0.6 and λM=0.4; below is the graph of that function over I:

[Figure: P(M(I)>R(I)+D) as a function of I, for D=6, λR=0.6, λM=0.4]
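The sum above can be evaluated directly with scipy. A sketch (the truncation point `kmax` is an assumption, chosen large enough that the Poisson pmf of R(I) is negligible beyond it):

```python
import numpy as np
from scipy.stats import poisson

def p_M_leads(I, lam_M=0.4, lam_R=0.6, D=6, kmax=200):
    """P(M(I) > R(I) + D): sum over k of the survival function
    of M(I) at k + D times the pmf of R(I) at k."""
    k = np.arange(kmax)
    return np.sum(poisson.sf(k + D, lam_M * I) * poisson.pmf(k, lam_R * I))

# Sample a few points of the curve in the figure above.
for I in (1, 10, 50, 100):
    print(I, p_M_leads(I))
```

Plotting `p_M_leads` over a grid of I reproduces the curve shown above.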

The next step is to find the probability of this happening at any point in time; let's call that Q. My thought is that this is equivalent to one minus the probability of M never being above R+D. I.e., letting N approach infinity, what is P(R(N)+D≥M(N)) conditional on this also having held for all previous values of N?

P(R(I)+D≥M(I)) is the same as 1−P(M(I)>R(I)+D); let's define that as the function g(I):

$$g(I)=1-P\big(M(I)>R(I)+D\big)$$

As N tends to infinity, this can be rewritten as the geometric (product) integral of g(I):

$$Q=1-\exp\left(\int_0^N\ln\big(g(I)\big)\,dI\right)$$

$$Q=1-\exp\left(\int_0^N\ln\Big(1-P\big(M(I)>R(I)+D\big)\Big)\,dI\right)$$

as $N\to\infty$.

where P(M(I)>R(I)+D) is the function derived above. Substituting it in:

$$Q=1-\exp\left(\int_0^N\ln\left(1-\sum_{k=0}^{\infty}\underbrace{P\big(M(I)>k+D\big)}_{1-\text{Poisson CDF}}\cdot\underbrace{P\big(R(I)=k\big)}_{\text{Poisson pmf}}\right)dI\right)$$

as $N\to\infty$.

Now, to me this should give the final value of Q for any given D, λR and λM. However, there is a problem: we should be able to rescale the lambdas however we want, since only their ratio should matter. To build on the example from before with D=6, λR=0.6 and λM=0.4: this is effectively the same as D=6, λR=0.06 and λM=0.04, as long as the interval is divided by 10, i.e. 10 cars every 10 minutes is the same as 1 car every minute. However, doing this produces a different result: D=6, λR=0.6 and λM=0.4 yields a Q of 0.5856116, while D=6, λR=0.06 and λM=0.04 yields a Q of 0.9998507. The immediate realization is that 1−(1−0.5856116)^10=0.9998507, and the reason is actually fairly simple if we compare the graphs of the two outcomes; the graph below shows the function for D=6, λR=0.06 and λM=0.04.

[Figure: P(M(I)>R(I)+D) as a function of I, for D=6, λR=0.06, λM=0.04]

As can be seen, the shape of the probability curve does not change; however, it now takes ten times as many intervals to reach the same probability. Since Q depends on the interval over which the function is integrated, this naturally has an implication. This means that something is wrong, as the result should not depend on my starting lambdas, especially because there is no single correct starting lambda: 0.04 and 0.06 is as correct as 0.4 and 0.6, or 1 and 1.5, etc., as long as the interval is scaled accordingly. While I can easily convert between the two results (going from 0.4 and 0.6 to 0.04 and 0.06 amounts to raising 1−Q to the tenth power), all of these lambdas are equally valid starting points, so the calculation cannot be correct.
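The discrepancy can be reproduced numerically. A sketch in Python that approximates the integral with a Riemann sum (the step size `dI`, horizon `N`, and truncation `kmax` are assumptions of this sketch):

```python
import numpy as np
from scipy.stats import poisson

def p_M_leads(I, lam_M, lam_R, D=6, kmax=350):
    """P(M(I) > R(I) + D), truncating the sum over k at kmax."""
    k = np.arange(kmax)
    return np.sum(poisson.sf(k + D, lam_M * I) * poisson.pmf(k, lam_R * I))

def Q(lam_M, lam_R, D=6, N=400, dI=0.5):
    """Riemann-sum approximation of 1 - exp(integral of ln g(I))."""
    grid = np.arange(dI, N, dI)
    log_g = np.array([np.log(1 - p_M_leads(I, lam_M, lam_R, D)) for I in grid])
    return 1 - np.exp(np.sum(log_g) * dI)

q1 = Q(0.4, 0.6)             # original lambdas
q2 = Q(0.04, 0.06, N=4000)   # same ratio, timescale stretched by 10
print(q1, q2)                # two different values for the "same" setup
```

Because the integrand for the scaled lambdas is a stretched copy of the original, the integral is exactly ten times larger, which is precisely the 1−(1−Q)^10 relationship observed above.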

To show this impact, I graphed Q as a function of t, where t is a scaling factor applied to the lambdas, with starting lambdas λM=0.4 and λR=1.5·λM. The output can be seen in the graph below:

[Figure: Q as a function of the scaling factor t]

This is where I am stuck: to me the approach looks correct, but the result is obviously wrong. My initial thought is that I am missing a fundamental rescaling somewhere, but I can't for the life of me figure out where.

Thank you for reading, any and all help is greatly appreciated.

Additionally, if anyone wants my R-code please let me know and I will upload it.

Answer

Let the collective times of the processes be $T=(0=t_0<t_1<t_2<\cdots)$. Because these are independent Poisson processes, almost surely exactly one of them is observed at each of these times. For $i>0$, define

$$B(i)=\begin{cases}+1 & \text{if } R \text{ jumps at } t_i\\[2pt] -1 & \text{if } M \text{ jumps at } t_i\end{cases}$$

and accumulate the B(i) into the process W: that is, W(0)=0 and W(i)=W(i−1)+B(i) for all i>0. W(i) counts how many more times R has appeared than M just after time t_i.
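This construction can be sketched in Python by simulating both processes from exponential interarrival times and merging them (the horizon and sample sizes are assumptions of the sketch):

```python
import numpy as np

# Build the merged event times t_i and the walk W(i) from two
# simulated Poisson processes (exponential interarrival times).
rng = np.random.default_rng(0)
lam_R, lam_M, horizon = 0.6, 0.4, 1000.0

t_R = np.cumsum(rng.exponential(1 / lam_R, size=2000))  # event times of R
t_M = np.cumsum(rng.exponential(1 / lam_M, size=2000))  # event times of M
t_R, t_M = t_R[t_R < horizon], t_M[t_M < horizon]

# Merge the times; B(i) = +1 for an R event, -1 for an M event.
times = np.concatenate([t_R, t_M])
B = np.concatenate([np.ones_like(t_R), -np.ones_like(t_M)])
order = np.argsort(times)
W = np.cumsum(B[order])   # W(i) for i = 1, 2, ... (W(0) = 0 is implicit)
```

Plotting `(times[order], W)` reproduces the kind of figure shown below; the final value `W[-1]` equals the excess count of R events over M events.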

Figure: simulation

This figure shows realizations of R (in red) and M (in medium blue) as “rug plots” at the top. The points plot the values of (ti,W(i)). Each red point represents an increase in the excess R(ti)M(ti) while each blue point shows a decrease in the excess.

For $b=0,1,2,\ldots$, let $\mathcal{E}_b$ be the event that at least one of the $W_i$ is less than or equal to $-b$, and let $f(b)$ be its probability.

The question asks for f(D+1).

Let λ=λR+λM. This is the rate of the combined processes. W is a Binomial random walk, because

$$\Pr\big(B(i)=+1\big)=\frac{\lambda_R}{\lambda},\qquad \Pr\big(B(i)=-1\big)=\frac{\lambda_M}{\lambda}.$$

Thus, the answer equals the chance that this Binomial random walk W encounters an absorbing barrier at $-D-1$.

The most elementary way to find this chance observes that

f(0)=1

because $W(0)=0$; and, for all $b>0$, the two possible next steps of $\pm 1$ recursively yield

$$f(b) = \frac{\lambda_R}{\lambda} f(b+1) + \frac{\lambda_M}{\lambda} f(b-1).$$

Assuming $\lambda_R \ge \lambda_M$, the unique solution for $b\ge 0$ is

$$f(b) = \left(\frac{\lambda_M}{\lambda_R}\right)^b,$$

as you can check by plugging this into the foregoing defining equations. Thus,

The answer is $$\Pr(\mathcal{E}_{D+1})=f(D+1)=\left(\frac{\lambda_M}{\lambda_R}\right)^{D+1}.$$
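As a sanity check, a short Python snippet (using the example values D=6, λR=0.6, λM=0.4 from the question) confirms the boundary condition and the recursion, and evaluates the answer:

```python
# Check that f(b) = (lam_M/lam_R)**b satisfies f(0) = 1 and the
# recursion f(b) = (lam_R/lam) f(b+1) + (lam_M/lam) f(b-1).
lam_R, lam_M = 0.6, 0.4
lam = lam_R + lam_M

def f(b):
    return (lam_M / lam_R) ** b

assert f(0) == 1.0
for b in range(1, 20):
    rhs = (lam_R / lam) * f(b + 1) + (lam_M / lam) * f(b - 1)
    assert abs(f(b) - rhs) < 1e-12

D = 6
print(f(D + 1))   # the answer: (2/3)**7, about 0.0585
```

Note that the answer depends only on the ratio λM/λR, which resolves the scaling inconsistency the question ran into.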

Attribution
Source: Link, Question Author: no nein, Answer Author: whuber
