Tre P. answered 21h
Test-Prep Specialist (LSAT, SAT, GED) • AP/College Math & Science
Consider the gambler’s capital after each game as a random walk. Let P(k) be the probability that A will eventually be ruined given that he currently has k dollars. We know
• P(0) = 1, because once A has no money he is already ruined.
• For k ≥ 1, after one game his capital is k+1 with probability p (he wins $1) and k-1 with probability q=1‑p (he loses $1). Conditioning on the first game gives a difference equation
P(k) = p·P(k+1) + q·P(k‑1).
To solve, we look for a solution of the form P(k) = C·t^k + D, where t = q/p. Substituting into the recurrence confirms that this is the general solution whenever t ≠ 1; in the fair case p = q = 1/2 we have t = 1, the two basic solutions coincide, and the general solution is instead linear (handled in Case 2 below).
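As a quick sanity check (not part of the original argument), here is a short snippet verifying numerically that P(k) = t^k satisfies the recurrence P(k) = p·P(k+1) + q·P(k−1) for a sample value of p:

```python
# Numerical check that P(k) = t**k solves P(k) = p*P(k+1) + q*P(k-1),
# where t = q/p. The specific value p = 2/3 is just an example.
p = 2 / 3
q = 1 - p
t = q / p

for k in range(1, 6):
    lhs = t ** k
    rhs = p * t ** (k + 1) + q * t ** (k - 1)
    assert abs(lhs - rhs) < 1e-12  # recurrence holds up to rounding error

print("recurrence satisfied for k = 1..5")
```

The same algebra works for any 0 < p < 1: p·t^{k+1} + q·t^{k−1} = t^{k−1}(p·t² + q) = t^{k−1}·(q/p) · (p + q) = t^k.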
**Case 1: t > 1 (that is, p < q).** Here the random walk drifts downward. Because t > 1, the term C·t^k would grow without bound as k increases; since ruin probabilities lie in [0, 1], we must take C = 0. The boundary condition P(0) = 1 then gives D = 1. Thus P(k) = 1 for every starting capital. In words, if A is less likely to win each game than to lose it (p < 1/2), the negative drift drives the walk to 0 with probability 1, no matter how much capital A starts with.
**Case 2: t = 1 (p = q = 1/2).** When the games are fair the recurrence reduces to P(k) = (P(k+1) + P(k−1))/2. The general solution is linear: P(k) = C·k + D. To keep the probabilities bounded as k → ∞ (there is no upper boundary), we must set C = 0. Using P(0) = 1 again forces D = 1, hence P(k) = 1 for all k. A symmetric random walk in one dimension is recurrent, so with probability 1 it eventually hits 0.
**Case 3: t < 1 (p > q).** When A's chance of winning each game exceeds B's, the random walk has positive drift. The t^k term now decays to zero as k increases. Since the ruin probability is the minimal non-negative solution of the recurrence, we must have P(k) → 0 as k → ∞, which forces D = 0. The boundary condition P(0) = 1 then gives C = 1, so P(k) = t^k. Since t = q/p < 1, the probability of eventual ruin decreases geometrically in the starting capital. For example, if p = 2/3 and q = 1/3 then t = 1/2, and a player beginning with $M has ruin probability (1/2)^M.
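The p = 2/3 example can also be checked by simulation. The sketch below is my own illustration (the function name `ruin_probability` and the ceiling `cap` are not from the problem): since A's capital can grow without bound, each walk is truncated at a high ceiling and counted as "never ruined" once it gets there, which introduces only a negligible error (the ruin probability from capital 100 is (1/2)^100).

```python
import random

def ruin_probability(p, start, trials=20_000, cap=100, seed=0):
    """Monte Carlo estimate of the probability that a walk starting at
    `start`, stepping +1 w.p. p and -1 w.p. 1-p, hits 0 before `cap`."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        k = start
        while 0 < k < cap:
            k += 1 if rng.random() < p else -1
        ruined += (k == 0)
    return ruined / trials

p, M = 2 / 3, 3
est = ruin_probability(p, M)
theory = ((1 - p) / p) ** M  # (1/2)**3 = 0.125
print(f"simulated: {est:.3f}   theory: {theory:.3f}")
```

With 20,000 trials the estimate should land within a couple of percentage points of 0.125.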
Putting these cases together we obtain the result you were asked to show:
\[
P(\text{ruin})=\begin{cases}
1,&\text{if }t=\frac{q}{p}\ge 1\;(p\le \frac{1}{2}),\\[6pt]
\left(\frac{q}{p}\right)^M,&\text{if }t=\frac{q}{p}<1\;(p>\frac{1}{2}).
\end{cases}
\]
(The boundary case p = q = 1/2 gives t = 1 and falls in the first branch.) This formula captures the intuition that if A wins no more often than B, ruin is certain, whereas if A wins more often, his probability of ruin decays exponentially with his initial capital.
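The two-branch formula translates directly into a tiny helper function (the name `gamblers_ruin` is my own, purely for illustration):

```python
def gamblers_ruin(p, M):
    """Probability that A, starting with M dollars against an infinitely
    rich opponent and winning each $1 game with probability p, is
    eventually ruined: 1 if p <= 1/2, else (q/p)**M."""
    q = 1 - p
    return 1.0 if p <= 0.5 else (q / p) ** M

print(gamblers_ruin(0.5, 10))   # fair game: 1.0
print(gamblers_ruin(2 / 3, 3))  # approximately (1/2)**3 = 0.125
```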