Worm and Apple Expected Value

An apple is located at vertex $A$ of pentagon $ABCDE$, and a worm is
located two vertices away, at $C$. Every day the worm crawls with
equal probability to one of the two adjacent vertices. Thus after one
day the worm is at vertex $B$ or $D$, each with probability $1/2$.
After two days, the worm might be back at $C$ again, because it has no
memory of previous positions. When it reaches vertex $A$, it stops to
dine.

(a) What is the mean of the number of days until dinner?

(b) Let $p$ be the probability that the number of days is $100$ or more.
What does Markov’s Inequality say about $p$?

For (a), let $X$ be the random variable defined by the number of days until dinner. So $$ P(X = 0) = 0 \\ P(X=1) = 0 \\ P(X=2) = \frac{1}{\binom{5}{2}} \\ \vdots$$

What would be the general distribution?

For (b), if we know (a), then we know that $$P(X \geq 100) \leq \frac{E(X)}{100}$$

Answer

In the excellent answer by Glen_b, he shows that you can calculate the expected value analytically using a simple system of linear equations. Following this analytic method, you can determine that the expected number of moves to the apple is six. Another excellent answer by whuber shows how to derive the probability mass function for the process after any given number of moves, and this method can also be used to obtain an analytic solution for the expected value. If you would like some further insight into this problem, you should read some papers on circular random walks (see e.g., Stephens 1963).
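To sketch how such a system can be set up (this is one standard first-step argument, not necessarily the exact presentation in Glen_b's answer), let $e_S$ denote the expected number of moves for the worm to reach $A$ from vertex $S$. Conditioning on the first move gives:

$$e_A = 0, \qquad e_B = 1 + \tfrac{1}{2} e_C, \qquad e_C = 1 + \tfrac{1}{2} e_B + \tfrac{1}{2} e_D, \qquad e_D = 1 + \tfrac{1}{2} e_C + \tfrac{1}{2} e_E, \qquad e_E = 1 + \tfrac{1}{2} e_D.$$

By the symmetry of the pentagon, $e_B = e_E$ and $e_C = e_D$, which reduces this to two equations and gives $e_B = 4$ and $e_C = 6$.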

To give an alternative view of the problem, I am going to show you how you can get the same result using the brute-force method of just computing the Markov chain numerically with statistical software. This method is inferior to analytical examination in many respects, but it has the advantage that it lets you deal with the problem without requiring any major mathematical insight.


Brute force computational method: Taking the states in order $A,B,C,D,E$, your Markov chain transitions according to the following transition matrix:

$$\mathbf{P} = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\[6pt]
\tfrac{1}{2} & 0 & \tfrac{1}{2} & 0 & 0 \\[6pt]
0 & \tfrac{1}{2} & 0 & \tfrac{1}{2} & 0 \\[6pt]
0 & 0 & \tfrac{1}{2} & 0 & \tfrac{1}{2} \\[6pt]
\tfrac{1}{2} & 0 & 0 & \tfrac{1}{2} & 0 \\[6pt]
\end{bmatrix}$$

The first state is the absorbing state $A$, where the worm is at the apple. Let $T_C$ be the number of moves until the worm gets to the apple from state $C$. Since $A$ is absorbing, the worm is at the apple after $n$ moves if and only if it has already arrived by then, so for all $n \in \mathbb{N}$ we have $\mathbb{P}(T_C \leqslant n) = \{ \mathbf{P}^n \}_{C,A}$, and the expected number of moves to get to the apple from this state is:

$$\mathbb{E}(T_C) = \sum_{n=0}^\infty \mathbb{P}(T_C > n) = \sum_{n=0}^\infty (1-\{ \mathbf{P}^n \}_{C,A}).$$
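This uses the tail-sum formula for a nonnegative integer-valued random variable, which follows by swapping the order of summation:

$$\sum_{n=0}^\infty \mathbb{P}(T_C > n) = \sum_{n=0}^\infty \sum_{k=n+1}^\infty \mathbb{P}(T_C = k) = \sum_{k=1}^\infty k \, \mathbb{P}(T_C = k) = \mathbb{E}(T_C).$$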

The terms in the sum decrease exponentially for large $n$, so we can compute the expected value to any desired level of accuracy by truncating the sum at a finite number of terms. (The exponential decay of the terms ensures that we can make the total size of the omitted terms smaller than any desired tolerance.) In practice it is easy to take enough terms that the omitted remainder is extremely small.
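If you want a rough gauge of how fast these terms decay, one option (a minimal sketch; the matrix Q below is the transient block of $\mathbf{P}$ over states $B, C, D, E$, introduced only for this check) is to compute the spectral radius of that block in R:

#Spectral radius of the transient block governs the geometric decay rate
Q <- matrix(c(0,   1/2, 0,   0,
              1/2, 0,   1/2, 0,
              0,   1/2, 0,   1/2,
              0,   0,   1/2, 0),
            nrow = 4, ncol = 4,
            byrow = TRUE);
r <- max(Mod(eigen(Q)$values));
r;      #approximately 0.809
r^100;  #rough order of magnitude of the terms omitted after N = 100

The spectral radius is approximately $0.809$ (it is $\cos(\pi/5)$), so the terms omitted after $N = 100$ are on the order of $10^{-9}$ and have no practical effect on the result.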


Programming this in R: You can program this as a function in R using the code below. This code has been vectorised to generate an array of powers of the transition matrix for a finite sequence of moves. We also generate a plot of the probability that the apple has not been reached, showing that this decreases exponentially.

#Create function to give n-step transition matrix for n = 1,...,N
#N is the last value of n
PROB <- function(N) { P <- matrix(c(1, 0, 0, 0, 0, 
                                    1/2, 0, 1/2, 0, 0, 
                                    0, 1/2, 0, 1/2, 0,
                                    0, 0, 1/2, 0, 1/2,
                                    1/2, 0, 0, 1/2, 0),
                                  nrow = 5, ncol = 5, 
                                  byrow = TRUE);
                      PPP <- array(0, dim = c(5,5,N));
                      PPP[,,1] <- P;
                      for (n in 2:N) { PPP[,,n] <- PPP[,,n-1] %*% P; } 
                      PPP }

#Calculate probabilities of reaching apple for n = 1,...,100
#Element [3,1,n] is the n-step probability from state C (row 3) to state A (column 1)
N  <- 100;
DF <- data.frame(Probability = PROB(N)[3,1,], Moves = 1:N);

#Plot probability of not having reached apple
library(ggplot2);
FIGURE <- ggplot(DF, aes(x = Moves, y = 1-Probability)) +
          geom_point() +
          scale_y_log10(breaks = scales::trans_breaks("log10", function(x) 10^x),
                        labels = scales::trans_format("log10", 
                                 scales::math_format(10^.x))) +
          ggtitle('Probability that worm has not reached apple') +
          xlab('Number of Moves') + ylab('Probability');
FIGURE;

#Calculate expected number of moves to get to apple
#Calculation truncates the infinite sum at N = 100
#We add one to represent the term for n = 0
EXP <- 1 + sum(1-DF$Probability);
EXP;

[1] 6

As you can see from this calculation, the expected number of moves to get to the apple is six. The calculation is extremely rapid using the vectorised code above for the Markov chain.
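If you want an exact cross-check (a small sketch using the standard fundamental-matrix identity rather than the truncated sum; Q is again the transient block of $\mathbf{P}$ over states $B, C, D, E$), the expected hitting times from all of the transient states can be obtained by solving a single linear system:

#Exact expected moves to reach the apple from states B, C, D, E
#These are the entries of the vector (I - Q)^(-1) %*% 1
Q <- matrix(c(0,   1/2, 0,   0,
              1/2, 0,   1/2, 0,
              0,   1/2, 0,   1/2,
              0,   0,   1/2, 0),
            nrow = 4, ncol = 4,
            byrow = TRUE);
solve(diag(4) - Q, rep(1, 4));

This gives expected hitting times of $4, 6, 6, 4$ for states $B, C, D, E$ respectively, and the value of $6$ for state $C$ agrees exactly with the truncated sum.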

[Figure: probability that the worm has not reached the apple, plotted on a log scale against the number of moves]
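
Returning to part (b) of the question, substituting this expected value into the Markov bound stated there gives

$$\mathbb{P}(X \geqslant 100) \leqslant \frac{\mathbb{E}(X)}{100} = \frac{6}{100} = 0.06.$$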

Attribution
Source: Link; Question Author: probguy3434; Answer Author: Ben
