# Oblivious randomized rounding

Oblivious randomized rounding via sample-and-increment rounding schemes.

Randomized rounding schemes based on random sampling can give better approximate solutions than do standard randomized-rounding schemes. Derandomizing them (via the method of conditional probabilities) yields greedy and Lagrangian-relaxation algorithms in a systematic way. We illustrate this by example.

# Sample-and-increment randomized-rounding schemes

A typical sample-and-increment randomized-rounding scheme has the form below:

> input: Problem instance $I$
> output: Approximate solution $x$ for $I$ (with positive probability)
>
> 1. Solve an LP to get a fractional solution $x^*$.
> 2. Initialize each $x_j \leftarrow 0$.
> 3. For $t = 1, 2, \ldots$ until some condition is met:
> 4. $\qquad$ Increment $x_{j_t}$, where $j_t$ is randomly sampled from the distribution $x^*/|x^*|$.
> 5. Return $x$.

(Above, $|x^*|$ denotes the 1-norm of $x^*$.)
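In Python, the scheme above might be sketched as follows. The LP-solving step and the stopping condition are problem-specific, so the fractional solution and a stopping predicate are taken as inputs here (all names are illustrative):

```python
import random

def sample_and_increment(x_star, stop, rng=random.Random(0)):
    """Generic sample-and-increment rounding loop (a sketch).

    x_star: the fractional LP solution (non-negative weights).
    stop:   problem-specific stopping predicate on the integer vector x.
    Each iteration samples an index j with probability x*_j / |x*|
    (where |x*| is the 1-norm of x*) and increments x_j.
    """
    x = [0] * len(x_star)
    while not stop(x):
        # rng.choices normalizes the weights, i.e. samples j from x*/|x*|.
        j = rng.choices(range(len(x_star)), weights=x_star)[0]
        x[j] += 1
    return x
```

The interesting design choices — which LP to solve, when to stop, and what guarantee the sampling gives — all live outside this loop.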

We are interested in two properties of sample-and-increment schemes:

1. They can give a better quality of approximation than conventional rounding schemes.

For example, for weighted set cover, a sample-and-increment scheme yields a cover of expected cost at most $H(s)\approx 1+\ln s$ times the cost of the fractional cover, where $s$ is the largest set size and $H(s)=1+\tfrac12+\cdots+\tfrac1s$ (matching the guarantee of Chvátal’s greedy algorithm).
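For concreteness, here is a sketch of that scheme for weighted set cover, assuming a fractional cover $x^*$ is already in hand (e.g. from an LP solver); the function and parameter names are illustrative:

```python
import random

def round_fractional_cover(sets, costs, x_star, universe, rng=random.Random(1)):
    """Sample-and-increment rounding for weighted set cover (a sketch).

    sets[j] is the j-th set; x_star[j] is its value in the fractional
    cover.  Repeatedly sample a set with probability proportional to
    x*_j and add it to the cover, until every element is covered.
    """
    covered, cover = set(), set()
    while covered != universe:
        j = rng.choices(range(len(sets)), weights=x_star)[0]
        cover.add(j)
        covered |= sets[j]
    return cover, sum(costs[j] for j in cover)
```

The stopping condition here is simply feasibility: the loop ends as soon as the sampled sets cover the universe.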

2. Derandomizing them using the standard method of conditional probabilities yields greedy and Lagrangian-relaxation algorithms.

Regarding the second point, to derive an algorithm, one first analyzes the rounding scheme to show a desired performance guarantee. A typical guarantee might have the following form:

Lemma.

For any instance $I$, with positive probability, the rounding scheme returns a $c$-approximate solution $x$.

The proof will use standard probabilistic methods (linearity of expectation, naive union bounds, Chernoff bounds, etc.).

Next, one applies the standard method of conditional probabilities to the existence proof. The resulting deterministic algorithm is guaranteed to reach a successful outcome, that is, to match the performance guarantee of the rounding scheme (but with probability 1):

Theorem (algorithm).

For any instance $I$, the algorithm returns a $c$-approximate solution $x$.

# Deriving a greedy or Lagrangian-relaxation algorithm

The goal in this setting is to obtain an algorithm of the following form:

> input: Problem instance $I$
> output: Approximate solution $x$ (guaranteed)
>
> 1. ~~Solve an LP to get a fractional solution $x^*$.~~
> 2. Initialize each $x_j \leftarrow 0$.
> 3. For $t = 1, 2, \ldots$ until some condition is met:
> 4. $\qquad$ Increment $x_{j_t}$, where $j_t$ is chosen so that the pessimistic estimator $\phi$ does not increase.
> 5. Return $x$.

This form is atypical, in that it skips the first step, solving the linear program. The trick is to find a pessimistic estimator $\phi$ such that the choice of $j_t$ in line 4 can be made without knowing the fractional solution $x^*$. Starting with a rounding scheme based on random sampling makes this possible. I call this kind of randomized rounding oblivious randomized rounding.
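For set cover, the derandomized choice of which variable to increment turns out not to depend on $x^*$ at all: choosing the set that minimizes cost per newly covered element keeps the pessimistic estimator from increasing. A sketch of the resulting algorithm, which is exactly Chvátal’s greedy algorithm (names are illustrative):

```python
def greedy_set_cover(sets, costs, universe):
    """Greedy weighted set cover (a sketch) -- the algorithm obtained
    by derandomizing the sample-and-increment rounding scheme.

    At each step, among the sets covering at least one new element,
    pick one minimizing cost per newly covered element.  No fractional
    solution x* is needed, so the LP-solving step is skipped.
    """
    covered, cover = set(), []
    while covered != universe:
        j = min(
            (j for j in range(len(sets)) if sets[j] - covered),
            key=lambda j: costs[j] / len(sets[j] - covered),
        )
        cover.append(j)
        covered |= sets[j]
    return cover
```

Note that the greedy rule plays the role of line 4 above: it is the deterministic choice that the method of conditional probabilities substitutes for the random sample.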

Many existing greedy/Lagrangian-relaxation algorithms can be derived this way. Prime examples are the greedy set-cover algorithm of Johnson, Lovász, and Chvátal, and the Lagrangian-relaxation algorithms of Plotkin, Shmoys, and Tardos (comparable to algorithms by Grigoriadis and Khachiyan). The latter compute $\epsilon$-approximate solutions to fractional packing/covering LPs.

From this viewpoint, each algorithm follows from an existence proof “in a systematic way” by applying the method of conditional probabilities. Standard probabilistic methods yield, and abstract, the arguments needed to prove the performance guarantees of the algorithms (approximate primal-dual pairs, invariants, smooth penalty functions, etc.).

In short, the “probabilistic lens” helps to organize and standardize the technical ideas underlying the design and analyses of these algorithms.

This collection of notes illustrates the ideas by example (notes on greedy, notes on Lagrangian relaxation). There are also notes on bounds related to random stopping times and other background material.

## More…

p.s. Blogging in LaTeX

I write these notes in LaTeX, then use plasTeX to translate them to HTML and MathJax. More on that here.