# Expected rolls to roll every number on a dice an odd number of times

Our family has recently learned how to play a simple game called ‘Oh Dear’. Each player has six playing cards (Ace, 2, 3, 4, 5, 6) turned face-up, and players take turns to roll the dice. Whatever number the dice shows, the corresponding card is turned over. The winner is the first player to turn all their cards face down, but if you roll the number of a card that is already face-down, then that card is turned face-up again (and you say ‘Oh Dear!’).

I want to work out the expected length of a game (in rolls of the dice). I’m interested first in working this out in the case of a single player playing alone, and then also in the question of how the answer changes with multiple players. This is equivalent to working out the expected number of times a player must roll the dice to have rolled every number on the dice an odd number of times. (I assume a fair six-sided dice, but again would be interested in a more general solution too.)

It is simple to work out the odds of winning as quickly as possible from any position, but I’m not sure how to go about calculating the expected number of rolls before a player would win…

You can think about your problem as a Markov chain, i.e., a set of states with certain transition probabilities between states. You start in one state (all cards face up) and end up in an absorbing state (all cards face down). Your question is about the expected number of steps until you reach that absorbing state, either for a single chain, or for the expected minimum number of steps across $n$ independent Markov chains running simultaneously.

And there are actually two slightly different ways to look at this. The first one, as whuber comments, is to consider the six cards as six different bits in $\{0,1\}$ and consider the state as a six-vector in $\{0,1\}^6$, i.e., the six-dimensional discrete hypercube. We start out at the vertex $(0,0,0,0,0,0)$, and the absorbing state is $(1,1,1,1,1,1)$. A step takes us to an adjacent vertex, in which exactly one bit is flipped with respect to the original state. That is, transitions take us from one vertex to any neighboring one at Hamming distance exactly one, and each such neighbor has an equal probability of being the next state.
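As a quick sanity check on this hypercube view, we can simulate the walk directly: start at the all-zeros vertex and flip one uniformly chosen bit per step until all bits are one. (A minimal sketch of my own, not part of the derivation; $10^4$ runs is enough to land near the exact expectation.)

```r
# Random walk on {0,1}^6: flip one uniformly chosen bit per step
# until we reach (1,1,1,1,1,1), and record the number of steps.
set.seed(1)
n_sims <- 1e4
steps <- replicate(n_sims, {
    cards <- rep(FALSE, 6)
    k <- 0
    while ( !all(cards) ) {
        i <- sample.int(6, 1)   # each neighbor equally likely
        cards[i] <- !cards[i]
        k <- k + 1
    }
    k
})
mean(steps)   # close to 83.2
```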

There is some literature on random walks and Markov chains on discrete cubes with Hamming distances, but nothing I could locate at short notice. We have a very nice thread on Random walk on the edges of a cube, which might be interesting.

The second way to look at this is to use the fact that all cards are interchangeable (assuming a fair die). Then we can use just seven different states, corresponding to the number of cards face down. We start in the state $i=0$, and the absorbing state is $i=6$. The transition probabilities depend on the state we are in:

• From $i=0$ (all cards face up), we will flip one card down and end up with one card face down with certainty: we have the transition probability $p_{01}=1$ (and $p_{0j}=0$ for $j\neq 1$).
• From $i=1$, we can reach $j=0$ with probability $p_{10}=\frac{1}{6}$ and $j=2$ with probability $p_{12}=\frac{5}{6}$.
• In general, from state $i$ we reach $j=i-1$ with probability $p_{i,i-1}=\frac{i}{6}$ (we roll one of the $i$ face-down cards) and $j=i+1$ with probability $p_{i,i+1}=\frac{6-i}{6}$.

Overall, we get the following transition matrix:

$$T=\begin{pmatrix} 0 & \frac{6}{6} & 0 & 0 & 0 & 0 & 0 \\ \frac{1}{6} & 0 & \frac{5}{6} & 0 & 0 & 0 & 0 \\ 0 & \frac{2}{6} & 0 & \frac{4}{6} & 0 & 0 & 0 \\ 0 & 0 & \frac{3}{6} & 0 & \frac{3}{6} & 0 & 0 \\ 0 & 0 & 0 & \frac{4}{6} & 0 & \frac{2}{6} & 0 \\ 0 & 0 & 0 & 0 & \frac{5}{6} & 0 & \frac{1}{6} \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$
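Since the pattern is the same for any fair $m$-sided die (down with probability $i/m$, up with probability $(m-i)/m$, state $m$ absorbing), the matrix can be built programmatically. A sketch, for the general question raised above (the helper name `make_transition_matrix` is mine):

```r
# Build the (m+1) x (m+1) transition matrix for a fair m-sided die:
# from state i we move to i-1 with prob. i/m and to i+1 with prob. (m-i)/m;
# state m (all cards face down) is absorbing.
make_transition_matrix <- function(m = 6) {
    T <- matrix(0, nrow=m+1, ncol=m+1)
    for ( i in 0:(m-1) ) {
        if ( i > 0 ) T[i+1, i] <- i/m
        T[i+1, i+2] <- (m-i)/m
    }
    T[m+1, m+1] <- 1
    T
}
make_transition_matrix(6)   # reproduces the matrix T above
```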

We start with certainty in the state $i=0$. We can encode the probabilities for each state at a certain point with a vector $v\in[0,1]^7$, and our starting state corresponds to $v_0=(1,0,0,0,0,0,0)$.

Here is a fundamental fact about Markov chains (which is easy to see and to prove via induction): the probabilities for the state after $k$ transitions are given by $v_k=(T')^kv_0$. (That is $T$ transposed. You can also work with row vectors $v$; then you don’t need to transpose, but “$v_0T^k$” takes a little getting used to.)

Thus, the probability that we have ended up in the absorbing state $i=6$ after $k$ steps is precisely the last entry in that vector, or $v_k[6]=((T')^kv_0)[6]$. Of course, we could already have been in the absorbing state after $k-1$ steps. So the probability that our Markov chain ends up in the absorbing state for the first time after $k$ steps is

$$p_k := ((T')^kv_0)[6]-((T')^{k-1}v_0)[6].$$

We can numerically calculate $p_k$ for a large enough number of $k\leq K$ such that $\sum_{k=0}^K p_k\approx 1$, and there may even be a closed form solution. Then, given $p_k$, we can calculate the expectation as

$$\sum_{k=0}^\infty kp_k \approx \sum_{k=0}^K kp_k.$$
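For the single-player expectation, there is in fact no need to truncate at all: a standard result for absorbing chains is that the expected hitting times from the transient states solve the linear system $(I-Q)m=\mathbf{1}$, where $Q$ is the transient block of $T$ (states $0$ through $5$). A sketch:

```r
# Exact single-player expectation via the fundamental matrix:
# with Q the 6x6 transient block of T, the expected numbers of steps
# to absorption solve (I - Q) m = 1; m[1] is the answer from state i = 0.
Q <- rbind(
    c(0,6,0,0,0,0),
    c(1,0,5,0,0,0),
    c(0,2,0,4,0,0),
    c(0,0,3,0,3,0),
    c(0,0,0,4,0,2),
    c(0,0,0,0,5,0))/6
m <- solve(diag(6) - Q, rep(1, 6))
m[1]   # exactly 416/5 = 83.2
```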

Next, assume we have $n$ players, and we want to know after how many steps the game will end, i.e., when the first player has all their cards face down. We can easily calculate the probability $q_k^n$ that at least one player has all cards face down after $k$ or fewer steps by noting that

$$\begin{align*} q_k^n &= P(\text{at least one player has all cards face down after } k \text{ or fewer steps}) \\ &= 1-P(\text{all } n \text{ players need at least } k+1 \text{ steps}) \\ &= 1-P(\text{one player needs at least } k+1 \text{ steps})^n \\ &= 1-\bigg(\sum_{j=k+1}^\infty p_j\bigg)^n \\ &= 1-\bigg(1-\sum_{j=0}^k p_j\bigg)^n. \end{align*}$$

From this, we can derive the probability $p^n_k$ that a game of $n$ players ends after exactly $k$ steps:

$$p^n_k = q_k^n-q_{k-1}^n = \bigg(1-\sum_{j=0}^{k-1} p_j\bigg)^n-\bigg(1-\sum_{j=0}^k p_j\bigg)^n.$$

And from this, in turn, we can again calculate the expected length of a game with $n$ players:

$$\sum_{k=0}^\infty kp^n_k \approx \sum_{k=0}^K kp^n_k.$$
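An equivalent route to this expectation swaps the order of summation: for a nonnegative integer random variable, $E[N_{\min}]=\sum_{k\geq 0}P(N_{\min}>k)=\sum_{k\geq 0}\big(1-\sum_{j\leq k}p_j\big)^n$, which avoids forming the differences $p^n_k$ explicitly. A self-contained sketch (it recomputes the single-player $p_k$ first):

```r
# n-player expectation via the tail-sum identity
# E[N_min] = sum_{k>=0} P(N_min > k) = sum_{k>=0} (1 - F(k))^n.
K <- 10000
Tmat <- rbind(
    c(0,6,0,0,0,0,0),
    c(1,0,5,0,0,0,0),
    c(0,2,0,4,0,0,0),
    c(0,0,3,0,3,0,0),
    c(0,0,0,4,0,2,0),
    c(0,0,0,0,5,0,1),
    c(0,0,0,0,0,0,6))/6
v <- c(1,0,0,0,0,0,0)
absorbed <- numeric(K)
for ( k in 1:K ) {
    v <- as.vector(t(Tmat) %*% v)
    absorbed[k] <- v[7]       # P(absorbed after at most k steps)
}
p <- diff(c(0, absorbed))     # first-passage probabilities p_1, ..., p_K
expected_rolls <- function(n) 1 + sum((1 - cumsum(p))^n)
expected_rolls(1)   # 83.2, as before
expected_rolls(4)   # ~25.45
```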

As I wrote above, there may be a closed form solution for the $p_k$, but for now, we can numerically evaluate them using R. I’m using $K=10{,}000$, so that $\sum_{k=0}^K p_k=1$ up to machine accuracy.

```r
max_steps <- 10000
state_probabilities <- matrix(NA, nrow=max_steps+1, ncol=7,
                              dimnames=list(0:max_steps, 6:0))
state_probabilities[1, ] <- c(1,0,0,0,0,0,0)
transition_matrix <- rbind(
    c(0,6,0,0,0,0,0),
    c(1,0,5,0,0,0,0),
    c(0,2,0,4,0,0,0),
    c(0,0,3,0,3,0,0),
    c(0,0,0,4,0,2,0),
    c(0,0,0,0,5,0,1),
    c(0,0,0,0,0,0,6))/6

for ( kk in 1:max_steps ) {
    state_probabilities[kk+1, ] <- t(transition_matrix) %*% state_probabilities[kk, ]
}

probs <- diff(state_probabilities[, 7])
sum(probs)                    # yields 1
sum(probs*seq_along(probs))   # yields 83.2

plot(probs[1:400], type="h", xlab="Number of steps", ylab="Probability", las=1)
```


Next, this is how we get the probabilities $p^4_k$ for $n=4$ players:

```r
n_players <- 4

probs_minimum <- sapply(1:max_steps,
    function(kk) (1-sum(probs[1:(kk-1)]))^n_players - (1-sum(probs[1:kk]))^n_players)
plot(probs_minimum[1:400], type="h", xlab="Number of steps", ylab="Probability",
    las=1, main=paste(n_players, "players"))
```


Of course, four players finish more quickly than a single player. For $n=4$, we get an expected value of

```r
sum(probs_minimum*seq_along(probs_minimum))
[1] 25.44876
```


Finally, I like to confirm calculations like this using simulation.

```r
n_sims <- 1e5
steps_minimum <- rep(NA, n_sims)
pb <- winProgressBar(max=n_sims)   # Windows-only; use txtProgressBar() elsewhere
for ( ii in 1:n_sims ) {
    setWinProgressBar(pb, ii, paste(ii, "of", n_sims))
    set.seed(ii)    # for reproducibility
    states <- matrix(FALSE, nrow=6, ncol=n_players)
    n_steps <- 0
    while ( TRUE ) {
        n_steps <- n_steps+1
        for ( jj in 1:n_players ) {
            roll <- sample(1:6, 1)
            states[roll, jj] <- !states[roll, jj]
        }
        if ( any(colSums(states) == 6) ) {
            steps_minimum[ii] <- n_steps
            break
        }
    }
}
close(pb)
```


The distribution of the numbers of steps needed in our $10^5$ simulated games matches the calculated $p^4_k$ rather well:

```r
result <- structure(rep(0, length(probs_minimum)),
                    .Names=seq_along(probs_minimum))
result[names(table(steps_minimum))] <- as.vector(table(steps_minimum))/n_sims
cbind(result, probs_minimum)[1:30, ]
    result probs_minimum
1  0.00000    0.00000000
2  0.00000    0.00000000
3  0.00000    0.00000000
4  0.00000    0.00000000
5  0.00000    0.00000000
6  0.06063    0.06031414
7  0.00000    0.00000000
8  0.08072    0.07919228
9  0.00000    0.00000000
10 0.08037    0.08026479
11 0.00000    0.00000000
12 0.07382    0.07543464
13 0.00000    0.00000000
14 0.06826    0.06905406
15 0.00000    0.00000000
16 0.06409    0.06260212
17 0.00000    0.00000000
18 0.05668    0.05654555
19 0.00000    0.00000000
20 0.05180    0.05100393
21 0.00000    0.00000000
22 0.04570    0.04598101
23 0.00000    0.00000000
24 0.04078    0.04144437
25 0.00000    0.00000000
26 0.03749    0.03735245
27 0.00000    0.00000000
28 0.03241    0.03366354
29 0.00000    0.00000000
30 0.03026    0.03033861
```


Finally, the mean of the steps needed in the simulated games also matches the calculated expectation quite well:

```r
mean(steps_minimum)
[1] 25.43862
```