Misc. background material


Excerpts

Probabilistic method

A brief introduction to the probabilistic method, followed by some basic bounds and a few examples.

Alon, Spencer, and Erdős (1992) describe the method as follows:

In order to prove the existence of a combinatorial structure with certain properties, we construct an appropriate probability space and show that a randomly chosen element in the space has the desired properties with positive probability. [1]

The method has come into wide use since about 1950, even in areas of mathematics that, a priori, have nothing to do with probability. This is roughly because probability provides a useful high-level conceptual framework for understanding (formalizing, organizing, communicating) many widely applicable counting and averaging arguments.
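A classic first illustration (Erdős 1947, not part of this excerpt): 2-color the edges of the complete graph K_n uniformly at random. A fixed set of k vertices spans a monochromatic clique with probability 2^{1-\binom{k}{2}}, so by the union bound

```latex
\Pr[\text{some $k$-set is monochromatic}]
  \;\le\; \binom{n}{k}\, 2^{\,1-\binom{k}{2}} \;<\; 1
  \qquad \text{when } n = \lfloor 2^{k/2} \rfloor \text{ and } k \ge 3,
```

so some coloring has no monochromatic K_k; that is, the Ramsey number satisfies R(k,k) > 2^{k/2}.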

More…

basic bounds

Existence proofs, the naive union bound, linearity of expectation, Markov bound.
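In symbols (standard statements, listed here for orientation; the note's own development is behind the link):

```latex
% Union bound: for any events A_1, \dots, A_n,
\Pr\Big[\bigcup_{i=1}^n A_i\Big] \;\le\; \sum_{i=1}^n \Pr[A_i].

% Linearity of expectation: for any random variables X_1, \dots, X_n,
\textrm{E}\Big[\sum_{i=1}^n X_i\Big] \;=\; \sum_{i=1}^n \textrm{E}[X_i].

% Markov bound: for any non-negative random variable X and any a > 0,
\Pr[X \ge a] \;\le\; \textrm{E}[X]/a.

% Existence: if \Pr[A] > 0 in some probability space,
% then an outcome with property A exists.
```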

More…

examples (Max Cut, Turán’s theorem)

Examples: 2-approximation for Max Cut; Turán’s theorem
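The Max Cut argument can be sketched in a few lines (the graph below is invented for illustration): each edge crosses a uniformly random cut with probability exactly 1/2, so by linearity of expectation a random cut cuts |E|/2 edges on average, and hence some cut cuts at least |E|/2.

```python
import itertools
import random

def cut_value(edges, side):
    """Number of edges crossing the cut given by side: vertex -> 0 or 1."""
    return sum(1 for u, v in edges if side[u] != side[v])

# A small example graph (a 5-cycle plus one chord), invented for the demo.
vertices = list(range(5))
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]

# Each edge crosses a uniformly random cut with probability exactly 1/2,
# so the expected cut value is |E|/2 = 3.
random.seed(0)
trials = 10_000
avg = sum(cut_value(edges, {v: random.randint(0, 1) for v in vertices})
          for _ in range(trials)) / trials

# The probabilistic method concludes that SOME cut has >= |E|/2 edges.
# On a graph this small we can confirm by checking all 2^5 cuts.
best = max(cut_value(edges, dict(zip(vertices, bits)))
           for bits in itertools.product([0, 1], repeat=len(vertices)))
assert best >= len(edges) / 2
print(avg, best)
```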

More…

method of conditional probabilities (Max Cut)

The method of conditional probabilities converts a probabilistic existence proof into a deterministic algorithm.

The method of conditional probabilities is a systematic method for converting non-constructive probabilistic existence proofs into efficient deterministic algorithms that explicitly construct the desired object.
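For Max Cut, the conversion works out to a simple greedy rule (sketch below; the example graph is invented): place vertices one at a time, each on the side that cuts at least half of its edges to already-placed neighbors, so the conditional expected cut value never drops below |E|/2.

```python
def greedy_max_cut(vertices, edges):
    """Method of conditional probabilities applied to Max Cut.

    Place vertices one at a time, each on the side that cuts at least
    half of its edges to already-placed neighbors.  This choice keeps
    the conditional expected cut (over random placement of the rest)
    from dropping below |E|/2, so the final cut has >= |E|/2 edges.
    """
    side = {}
    for v in vertices:
        # Sides of v's already-placed neighbors.
        placed = [side[w] for u, w in edges if u == v and w in side]
        placed += [side[u] for u, w in edges if w == v and u in side]
        ones = sum(placed)
        # Side 0 cuts the placed neighbors on side 1, and vice versa.
        side[v] = 0 if ones >= len(placed) - ones else 1
    return side

# Invented example: a 6-cycle with two chords (it happens to be bipartite).
vertices = list(range(6))
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3), (1, 4)]
side = greedy_max_cut(vertices, edges)
cut = sum(1 for u, v in edges if side[u] != side[v])
assert cut >= len(edges) / 2   # the method's deterministic guarantee
print(cut)  # 8 -- here the greedy rule finds the full bipartition
```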

More…

pessimistic estimators (Turán's theorem)

Pessimistic estimators in the method of conditional probabilities.

In applying the method of conditional probabilities, exact conditional probabilities (or expectations) are sometimes hard to compute. Pessimistic estimators can be used instead. We illustrate the idea by example, using Turán’s theorem.
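For a feel of the target bound, here is the greedy algorithm that the derandomization for Turán's theorem produces (a sketch; the example graph is invented): repeatedly take a minimum-degree surviving vertex and delete its neighbors, yielding an independent set of size at least n/(1 + average degree).

```python
def greedy_independent_set(n, edges):
    """Repeatedly take a minimum-degree surviving vertex and delete its
    neighbors.  Guarantee (Turán's bound): the resulting independent set
    has size >= n / (1 + average degree), where average degree = 2m/n."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive, indep = set(range(n)), []
    while alive:
        v = min(sorted(alive), key=lambda u: len(adj[u] & alive))
        indep.append(v)
        alive -= adj[v] | {v}
    return indep

# Invented example: a triangle joined by an edge to a 4-cycle.
n, edges = 7, [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 3)]
indep = greedy_independent_set(n, edges)
E = {frozenset(e) for e in edges}
assert all(frozenset((u, v)) not in E for u in indep for v in indep if u != v)
# Turán-style guarantee: n / (1 + 2m/n) = 7 / (1 + 16/7), about 2.13.
assert len(indep) >= n / (1 + 2 * len(edges) / n)
print(indep)  # [0, 3, 5]
```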

More…

Chernoff bound

A brief introduction to Chernoff bounds.

If you’re already familiar with Chernoff bounds, you may prefer to skip directly to the statement and proof of a typical Chernoff bound.

Chernoff bounds (a.k.a. tail bounds, Hoeffding/Azuma/Talagrand inequalities, the method of bounded differences, etc. [1, 2]) are used to bound the probability that some function (typically a sum) of many “small” random variables falls in the tail of its distribution (far from its expectation).
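One typical statement, for orientation (constants vary across sources; the note's exact statement and proof are behind the link): let X = X_1 + \cdots + X_n with the X_i independent, each X_i \in [0,1], and \mu = \textrm{E}[X]. Then for any 0 < \varepsilon \le 1,

```latex
\Pr[X \ge (1+\varepsilon)\mu] \;\le\; e^{-\varepsilon^2 \mu / 3},
\qquad
\Pr[X \le (1-\varepsilon)\mu] \;\le\; e^{-\varepsilon^2 \mu / 2}.
```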

More…

Chernoff bound proof

The statement and proof of a typical Chernoff bound.

 

More…

Bounds related to random stopping times

Links to notes on bounds related to random stopping times


Wald’s equation

Wald’s equation, a form of linearity of expectation for sums with randomly many terms.

Consider a sum \sum_{t=1}^T x_t of random variables, where the number of terms T is itself a random variable. If each term x_t has expectation at most (or at least) \mu, then the expectation of the sum is at most (or at least) \mu\,\textrm{E}[T] (the bound on the expectation of each term, times the expected number of terms). This holds provided the random variables in the sum are bounded above or below and T is a stopping time with finite expectation.
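A quick simulation check of the equation (the setup below is invented for illustration): roll a fair die until the first 6 appears, including that final roll in the sum. Each roll has mean \mu = 3.5 and T, the number of rolls, is a stopping time with \textrm{E}[T] = 6, so Wald's equation gives \textrm{E}[\sum_t x_t] = 3.5 \cdot 6 = 21.

```python
import random

# Roll a fair die until the first 6; sum all rolls, including the last.
# Wald's equation: E[sum] = mu * E[T] = 3.5 * 6 = 21.
random.seed(0)
trials = 100_000
total = 0
for _ in range(trials):
    while True:
        roll = random.randint(1, 6)
        total += roll
        if roll == 6:
            break
avg = total / trials
print(avg)  # close to 21
```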

More…

Wald’s equation for dependent increments

A variant of Wald’s equation to use when the expected increment depends on the sum so far.

Wald's equation applies to any sequence that, with each step, increases (or decreases) in expectation by a constant additive amount. What about sequences where the expected change with each step depends on the current value? For example, suppose Alice starts with n coins. In each round t=1,2,\ldots,T, she flips each remaining coin and discards those that come up tails. She stops once all coins are discarded. What is the expected number of rounds?
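A simulation of the coin game (the parameters below are invented for the demo). Roughly half the coins are discarded each round, suggesting about \log_2 n rounds; the variant of Wald's equation in the note makes this intuition precise.

```python
import random

random.seed(0)

def rounds_until_empty(n):
    """Flip each remaining coin each round; discard tails; count rounds."""
    t = 0
    while n > 0:
        n = sum(1 for _ in range(n) if random.random() < 0.5)  # survivors
        t += 1
    return t

n, trials = 256, 2000
avg = sum(rounds_until_empty(n) for _ in range(trials)) / trials
print(avg)  # roughly log2(256) = 8, plus a small additive constant
```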

More…

Markov bound for super-martingales

For any non-negative super-martingale, the probability that it ever exceeds its initial value by a factor of c is at most 1/c.

Alice goes to the casino with $1. At the casino, she plays the following game repeatedly: she bets half her current balance on a fair coin flip. (For example, on the first flip, she bets 50 cents, so she wins 50 cents with probability 1/2 and loses 50 cents with probability 1/2.) Will Alice’s winnings ever reach $10 or more? The bound here says this happens with probability at most 1/10.
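A Monte-Carlo check of Alice's game (the trial count and horizon below are invented; her balance is a non-negative martingale, hence in particular a super-martingale, started at $1):

```python
import random

# Each step Alice bets half her balance on a fair flip, so her balance
# is multiplied by 1.5 (win) or 0.5 (loss).  The bound in the note says
# she reaches $10 or more with probability at most 1/10.
random.seed(0)
trials, horizon = 30_000, 100
hits = 0
for _ in range(trials):
    balance = 1.0
    for _ in range(horizon):
        balance *= 1.5 if random.random() < 0.5 else 0.5
        if balance >= 10:
            hits += 1
            break
fraction = hits / trials
print(fraction)  # empirically below the 1/10 bound
```

(The empirical frequency comes out somewhat below 1/10 because Alice typically overshoots $10 when she first crosses it.)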

More…

“Stopping-time” Chernoff bounds

Extending the Chernoff bound to handle sums with randomly many terms.

Alice goes to the casino and places bets on a sequence of fair coin flips. On the tth bet, Alice chooses an amount a_t \in [0,1] to bet: she wins a_t if this flip is heads, otherwise she loses a_t. Since it's Vegas, she never stops playing. Fix any \varepsilon > 0. Let W_t be the sum of bets won after the tth bet. Let L_t be the sum of bets lost after the tth bet.

More…

Expected maximum (or minimum) of many sums

Bounds on the expected maximum (or minimum) among a collection of sums.

It can be technically convenient to work with expectations directly, instead of working with probabilities. Here, given a collection of sums of 0/1 random variables, we bound the expected maximum (or minimum) sum in the collection.

For example, suppose Alice throws balls randomly into n bins just until the first bin has n balls. The bound says that the expected maximum number of balls in any bin will be at most n+2\sqrt{n\ln n}+\ln n. Similarly, the expected minimum number of balls in any bin will be at least n-2\sqrt{n\ln n}.
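A simulation of this example (parameters invented; "the first bin" is read here as a fixed bin, say bin 0, since otherwise the maximum would trivially equal n):

```python
import math
import random

# Throw balls uniformly at random until bin 0 holds n balls, then compare
# the fullest and emptiest bins against the note's bounds.
random.seed(0)
n, trials = 30, 2000
sum_max = sum_min = 0.0
for _ in range(trials):
    bins = [0] * n
    while bins[0] < n:
        bins[random.randrange(n)] += 1
    sum_max += max(bins)
    sum_min += min(bins)
avg_max, avg_min = sum_max / trials, sum_min / trials
dev = 2 * math.sqrt(n * math.log(n))
assert n <= avg_max <= n + dev + math.log(n)  # E[max] <= n + 2 sqrt(n ln n) + ln n
assert avg_min >= n - dev                     # E[min] >= n - 2 sqrt(n ln n)
print(avg_max, avg_min)
```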

More…

Expected deviation of a sum

Bounds on the expected deviation of a sum from a threshold.

Here are bounds on the expected deviation of a sum of 0/1-random variables above or below some threshold (typically near its mean).

For example, suppose Alice flips a fair coin n times. She pays Bob $1 for each head after the first (1+\varepsilon)n/2 heads (if any). What is her expected payment to Bob? The bounds here say: at most \varepsilon^{-1}\exp(-\varepsilon^2 n/6). For example, if \varepsilon=\Omega(1/\sqrt n), the expected payment is O(\sqrt n).

More…

Linear programming

Notes on Linear Programming and Integer Linear Programming.

More…

modeling Set Cover and Multicommodity Flow

Set Cover and Multicommodity flow as (integer) linear programs.

Modeling a problem as a linear program or integer linear program is a basic skill. Here are two examples.
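For orientation, here is the standard integer linear program for Set Cover (a standard formulation, not quoted from the note): given a universe U and sets S_1,\ldots,S_m with costs c_1,\ldots,c_m, let x_j indicate whether S_j is chosen.

```latex
\begin{array}{lll}
\text{minimize}   & \sum_{j} c_j\, x_j \\[2pt]
\text{subject to} & \sum_{j\,:\,e \in S_j} x_j \;\ge\; 1 & (\forall e \in U) \\[2pt]
                  & x_j \in \{0, 1\}                     & (\forall j).
\end{array}
% The LP relaxation replaces x_j \in \{0,1\} by 0 \le x_j \le 1.
```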

More…

rounding an LP relaxation

A simple example of computing an approximate solution by rounding the solution to a linear-program relaxation.

The basic paradigm:

  1. Model your problem as an integer linear program.

  2. Solve its linear program relaxation.

  3. Somehow round the solution x^* of the relaxed problem to get a solution \tilde x of the original problem.

The rounding step is typically most easily done with a so-called randomized rounding scheme.

Rounding a relaxation is one of two standard ways to use linear-programming relaxations to design approximation algorithms. (The other way is the primal-dual method.)
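The paradigm can be sketched end-to-end on Vertex Cover, a Set-Cover special case where each element (edge) appears in f = 2 sets (its endpoints). The instance and fractional solution below are supplied by hand rather than computed by an LP solver, and the deterministic threshold rule shown is one standard instantiation of step 3 (randomized rounding is another):

```python
# Vertex Cover LP relaxation: minimize sum(x_v) subject to
# x_u + x_v >= 1 for every edge, 0 <= x <= 1.
# Rounding rule: take every vertex with x*_v >= 1/2.  LP-feasibility of
# x* forces each edge to be covered, and the cost at most doubles.
edges = [(0, 1), (1, 2), (2, 0)]      # a triangle (illustrative instance)
x_star = {0: 0.5, 1: 0.5, 2: 0.5}     # an optimal LP solution, by hand
assert all(x_star[u] + x_star[v] >= 1 for u, v in edges)   # LP-feasible

cover = {v for v, x in x_star.items() if x >= 0.5}
assert all(u in cover or v in cover for u, v in edges)     # feasible rounding
cost = len(cover)    # 3, vs. LP value 1.5: within the factor-2 guarantee
assert cost <= 2 * sum(x_star.values())
print(cover, cost)
```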

More…

Randomized rounding

Randomly rounding a fractional LP solution to an integer solution.

The idea, introduced by Raghavan and Thompson in 1987, is to use the probabilistic method to round the solution of a linear program, converting it into an approximately optimal integer solution [3]. It’s a broadly useful technique. For many problems, randomized rounding yields algorithms with optimal approximation ratios (assuming P\neq NP). This note describes randomized rounding and gives a few examples.
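A minimal sketch in the Raghavan-Thompson spirit, on Set Cover (the instance and fractional solution below are invented, supplied by hand; a real use would first solve the LP relaxation): each round, include each not-yet-chosen set S_j independently with probability x^*_j. Since the x^* values covering any element sum to at least 1, each element is covered in a round with at least constant probability, so O(\log n) rounds suffice with high probability.

```python
import random

random.seed(0)
universe = set(range(6))
sets = {'A': {0, 1, 2}, 'B': {1, 3, 4}, 'C': {2, 4, 5}, 'D': {0, 3, 5}}
x_star = dict.fromkeys(sets, 0.5)   # LP-feasible: each element is in two sets

chosen, covered = set(), set()
for _ in range(1000):               # cap; almost surely far fewer rounds needed
    for name, prob in x_star.items():
        if name not in chosen and random.random() < prob:
            chosen.add(name)
            covered |= sets[name]
    if covered == universe:
        break
assert covered == universe
print(sorted(chosen))
```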

More…

Set Cover / greedy algorithm

A brief review of the greedy algorithm for the Set-Cover problem.
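The greedy algorithm in a few lines (the tiny instance is invented; the classic guarantee is a factor of H(n) \approx \ln n times optimal, where n is the universe size):

```python
def greedy_set_cover(universe, sets):
    """Repeatedly choose the set covering the most still-uncovered
    elements (ties broken by name, for determinism)."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        name = max(sorted(sets), key=lambda s: len(sets[s] & uncovered))
        cover.append(name)
        uncovered -= sets[name]
    return cover

universe = {1, 2, 3, 4, 5}
sets = {'A': {1, 2, 3}, 'B': {4, 5}, 'C': {1, 4}, 'D': {2, 5}, 'E': {3}}
cover = greedy_set_cover(universe, sets)
print(cover)  # ['A', 'B']
```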

 

More…

Set Cover / Wolsey’s generalization

Wolsey’s generalization of the greedy Set-Cover algorithm to a large class of problems.

It is natural to ask what general classes of problems the greedy Set-Cover algorithm generalizes to. Here we describe one such class, due to Wolsey (1982), that captures many, but not all, such problems.

More…

Lagrangian relaxation / example

A simple example of a Lagrangian-relaxation algorithm.

The algorithm is for Maximum Multicommodity Flow. It illustrates some prototypical aspects of Lagrangian relaxation.
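For orientation, here is the path-formulation LP for Maximum Multicommodity Flow and the generic Lagrangian move (standard formulations, not quoted from the note): P ranges over source-sink paths of the commodities and c_e are edge capacities; the relaxation prices the capacity constraints into the objective with multipliers y_e \ge 0 instead of enforcing them.

```latex
\max \Big\{ \textstyle\sum_P f_P \;:\;
  \textstyle\sum_{P \ni e} f_P \le c_e \ (\forall e),\ f \ge 0 \Big\}.
% Lagrangian relaxation, with multipliers y_e \ge 0 on the capacities:
\max_{f \ge 0}\ \sum_P f_P \;-\; \sum_e y_e \Big( \sum_{P \ni e} f_P - c_e \Big).
```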

More…

Lagrangian relaxation / discussion

A brief yet long-winded discussion of Lagrangian-relaxation algorithms.

 

More…
