# Markov bound for super-martingales

For any non-negative super-martingale, the probability that its maximum ever reaches a given value is at most its initial expectation divided by that value.

The Markov bound plays a fundamental role in the following sense: many probabilistic proofs, including, for example, the proof of the Chernoff bound, rely ultimately on the Markov bound. This note discusses a bound that plays a role similar to the Markov bound in a particular important scenario: when analyzing the maximum value achieved by a given non-negative super-martingale.

Here’s a simple example. Alice goes to the casino with \$1. At the casino, she plays the following game repeatedly: she bets half her current balance on a fair coin flip. (For example, on the first flip, she bets 50 cents, so she wins 50 cents with probability 1/2 and loses 50 cents with probability 1/2.) Will Alice’s winnings ever reach \$10 or more? The bound here says this happens with probability at most 1/10.
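To make the example concrete, here is a quick Monte Carlo sketch (the cap on rounds and the trial count are arbitrary choices, not part of the argument). It estimates the probability that Alice’s balance ever reaches \$10, which the bound caps at 1/10:

```python
import random

def alice_reaches(target=10.0, max_rounds=1000, rng=random):
    """Play Alice's game from $1: each round she bets half her balance
    on a fair coin flip.  Return True if her balance ever reaches `target`."""
    balance = 1.0
    for _ in range(max_rounds):
        if balance >= target:
            return True
        if rng.random() < 0.5:
            balance *= 1.5   # wins half her balance
        else:
            balance *= 0.5   # loses half her balance
    return balance >= target

random.seed(0)
trials = 20000
print(sum(alice_reaches() for _ in range(trials)) / trials)  # below the 1/10 bound
```

The empirical frequency comes out noticeably below 1/10: the bound is not tight here, because Alice’s balance overshoots \$10 when it first crosses it.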

# Markov bound for super-martingale maxima

Let $X_0, X_1, X_2, \ldots$ be a non-negative super-martingale — a sequence of non-negative random variables that is non-increasing in expectation: $X_t \ge 0$, and $E[X_{t+1} \mid X_0, X_1, \ldots, X_t] \le X_t$ for each $t$. The sequence may be finite or infinite.

Consider the event that $\max_t X_t \ge \beta$, for some given $\beta > 0$. To bound the probability of this event, if we have a bound on the expectation of $\max_t X_t$, we can use the Markov bound. For example, in the ideal case, if it happens that $E[\max_t X_t]$ is at most $E[X_0]$, then the Markov bound implies that the event in question happens with probability at most $E[X_0]/\beta$. Although $E[\max_t X_t]$ can be much larger than $E[X_0]$, the desired bound holds in any case:

Lemma (Markov for super-martingale maxima).

Fix any $\beta > 0$. Then:
(a) $\Pr[\exists t.\ X_t \ge \beta] \le E[X_0]/\beta$.
(b) $\Pr[\exists t.\ X_t > \beta] \le E[X_0]/\beta$.

In short, this bound substitutes for the Markov bound to give us a natural bound on the probability of the event $\max_t X_t \ge \beta$. Note that in most applications $\beta$ will be a fixed value independent of the outcome.
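The bound can be tight. A standard check (not from the note itself, assuming the usual gambler’s-ruin setup): a fair $\pm\$1$ walk started at \$1 and absorbed at \$0 is a non-negative martingale, and it reaches $\beta$ with probability exactly $1/\beta = E[X_0]/\beta$, since it lands on $\beta$ exactly rather than overshooting. A quick simulation, with an arbitrary trial count:

```python
import random

def walk_reaches(beta=10, rng=random):
    """Fair +/-1 walk from 1, absorbed at 0 (a non-negative martingale).
    Return True if it reaches beta before hitting 0."""
    x = 1
    while 0 < x < beta:
        x += 1 if rng.random() < 0.5 else -1
    return x >= beta

random.seed(0)
trials = 20000
print(sum(walk_reaches() for _ in range(trials)) / trials)  # close to 1/10
```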

# Proof idea

For our purposes, knowing how to use this bound is more important than knowing how to prove it. Here is the proof just for the sake of completeness.

To get the intuition, consider the following seemingly weaker bound. If $T$ is any stopping time with finite expectation, then by Wald’s equation (optional stopping) $E[X_T]$ is at most $E[X_0]$, so by the Markov bound $\Pr[X_T \ge \beta]$ is at most $E[X_0]/\beta$. That is, the desired bound holds for the single value $X_T$.

The proof of the lemma uses this argument, with $T$ specifically defined to be the first time $t$ such that $X_t \ge \beta$ (if any, else $T = \infty$). (In the example, this is analogous to Alice quitting as soon as her winnings reach \$10.) This is indeed a stopping time, and, crucially, the event $\max_t X_t \ge \beta$ occurs only if $X_T \ge \beta$. So the bound on $\Pr[X_T \ge \beta]$ from the previous paragraph implies the result. A technical obstacle is that $T$ might not have finite expectation, but this is easily overcome via a limit argument (apply the bound to the truncated stopping time $\min(T, n)$, then let $n \to \infty$).
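The crucial calculation — that stopping at the first time $X_t \ge \beta$ (truncated at step $n$) keeps the expectation at most $E[X_0]$ — can be checked numerically on Alice’s process. A sketch, with arbitrary choices of truncation and trial count (Alice’s balance is in fact a martingale, so here the stopped expectation stays equal to 1):

```python
import random

def stopped_value(beta=10.0, n=200, rng=random):
    """Run Alice's process, stopping at the first time the balance
    reaches beta, truncated at step n; return the stopped value."""
    x = 1.0
    for _ in range(n):
        if x >= beta:
            break
        x *= 1.5 if rng.random() < 0.5 else 0.5
    return x

random.seed(0)
trials = 20000
print(sum(stopped_value() for _ in range(trials)) / trials)  # about E[X_0] = 1
```

Applying the Markov bound to this stopped value is exactly the argument in the proof.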

# Pessimistic estimator

(a) Given the value $X_t$ at the current time $t$, as long as the bad event has not yet happened (that is, as long as $\max_{t' \le t} X_{t'} < \beta$), the value $X_t/\beta$ is a pessimistic estimator for the conditional probability of the event $\exists t.\ X_t \ge \beta$. The value is initially $X_0/\beta$, it is non-increasing in expectation with each step, and, as long as it remains less than 1, the event in question doesn’t happen.

(b) The same value $X_t/\beta$ is a pessimistic estimator for the event in part (b), $\exists t.\ X_t > \beta$: the value is initially $X_0/\beta$, it is non-increasing in expectation with each step, and, as long as it remains less than or equal to 1, the event in question doesn’t happen.
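To illustrate how such an estimator is used for derandomization, here is a generic sketch (my own illustration, not from the note; `outcomes` is a hypothetical helper that enumerates the possible next values at each step). Since the conditional expectation of $X_{t+1}$ is at most $X_t$, the minimum available outcome is also at most $X_t$; choosing it at every step keeps the estimator $X_t/\beta$ below 1, so the bad event provably never happens:

```python
def derandomized_run(x0, outcomes, beta, steps):
    """Method of conditional probabilities with X_t/beta as the
    pessimistic estimator.  At each step, pick an outcome that keeps
    the estimator from increasing; the minimum outcome always works,
    since it is at most the conditional expectation, which (by the
    super-martingale property) is at most the current value."""
    assert x0 < beta          # estimator starts below 1
    x, history = x0, [x0]
    for t in range(steps):
        x = min(outcomes(x, t))  # keeps x, hence x/beta, non-increasing
        history.append(x)
        assert x / beta < 1      # so the bad event never happens
    return history

# Alice's process: the two possible outcomes are 1.5x (win) or 0.5x (lose).
run = derandomized_run(1.0, lambda x, t: (1.5 * x, 0.5 * x), beta=10.0, steps=20)
print(max(run) < 10.0)  # True
```

For Alice’s process the derandomized choice is degenerate — always take the losing branch — but the same template applies whenever each step’s outcomes can be enumerated.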