Oblivious randomized rounding via sample-and-increment rounding schemes.
Randomized-rounding schemes based on random sampling can give better approximate solutions than standard randomized-rounding schemes do. Derandomizing them (via the method of conditional probabilities) yields greedy and Lagrangian-relaxation algorithms in a systematic way. We illustrate this by example.
Sample-and-increment randomized-rounding schemes
A typical sample-and-increment randomized-rounding scheme has the form below:
input: Problem instance {\cal I}  
output: Approximate solution for {\cal I} (w/ positive probability)  
1.  Solve an LP to get fractional solution x^*. 
2.  Initialize each \tilde x_j = 0. 
3.  For t=1,2,\ldots until some condition is met: 
4.  Increment \tilde x_j, where j is randomly sampled from the distribution x^*/|x^*|. 
5.  Return \tilde x. 
(Above, |x^*| = \sum_j x^*_j is the 1-norm of x^*.)
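For concreteness, here is a sketch of the scheme above specialized to weighted set cover, in Python. The instance encoding and the stopping condition ("all elements covered") are illustrative choices; x_star is assumed to be a feasible fractional cover, so the loop terminates with probability 1.

```python
import random

def sample_and_increment(x_star, sets, universe, rng=random.Random(0)):
    """Round a fractional set cover x_star by repeated random sampling.

    x_star:   dict mapping set index j -> fractional value x^*_j
              (assumed to be a feasible fractional cover).
    sets:     dict mapping set index j -> frozenset of elements it covers.
    universe: set of elements that must be covered.
    """
    norm = sum(x_star.values())            # |x^*|, the 1-norm of x^*
    indices = list(x_star)
    weights = [x_star[j] / norm for j in indices]
    x_tilde = {j: 0 for j in indices}      # integer solution, initially all zeros
    covered = set()
    while covered != universe:             # stop once every element is covered
        # sample j with probability x^*_j / |x^*| and increment x~_j
        j = rng.choices(indices, weights=weights)[0]
        x_tilde[j] += 1
        covered |= sets[j]
    return x_tilde
```

Each iteration increments one coordinate, so the returned \tilde x is integral, and its expected cost can be bounded in terms of the fractional cost, as discussed below.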
We are interested in two properties of sample-and-increment schemes:

They can give a better quality of approximation than conventional rounding schemes.
For example, for weighted set cover, a sample-and-increment scheme yields expected cost at most {\rm H}(d) times the cost of the fractional cover, where d is the largest set size (matching Chvátal’s greedy algorithm).

Derandomizing them using the standard method of conditional probabilities yields greedy and Lagrangian-relaxation algorithms.
Regarding the second point, to derive an algorithm, one first analyzes the rounding scheme to show a desired performance guarantee. A typical guarantee might have the following form:
For any instance {\cal I}, with positive probability, the rounding scheme returns a c-approximate solution \tilde x\ldots
The proof will use standard probabilistic methods (linearity of expectation, naive union bounds, Chernoff bounds, etc.).
Next, one applies the standard method of conditional probabilities to the existence proof. The resulting deterministic algorithm is guaranteed to reach a successful outcome, that is, to match the performance guarantee of the rounding scheme (but with probability 1):
For any instance {\cal I}, the algorithm returns a c-approximate solution \tilde x\ldots
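Schematically, the derandomized algorithm is the rounding scheme with each random choice replaced by a deterministic one that keeps a pessimistic estimator \phi from increasing. A minimal generic loop in Python (the parameter names are illustrative; \phi and the stopping test come from the probabilistic analysis of the particular scheme):

```python
def derandomize(state, moves, apply_move, phi, done):
    """Method of conditional probabilities, as a generic loop.

    phi(state) is a pessimistic estimator: an upper bound on the
    conditional probability of failure that starts below 1 (by the
    existence proof) and, for some available move, does not increase.
    Always taking such a move keeps phi < 1 throughout, so the final
    state is guaranteed to meet the performance guarantee.
    """
    while not done(state):
        # take a move minimizing the estimator; by the averaging
        # argument, the minimum cannot exceed the current value
        state = min((apply_move(state, m) for m in moves(state)), key=phi)
    return state
```

(The test instantiates this with a simple monotone potential for unweighted set cover, the number of uncovered elements, just to exercise the loop; the estimators that arise from actual sample-and-increment analyses are discussed in the notes.)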
Deriving a greedy or Lagrangian-relaxation algorithm
The goal in this setting is to obtain an algorithm of the following form:
input: Problem instance {\cal I}  
output: Approximate solution (guaranteed)  
1.  
2.  Initialize each \tilde x_j = 0. 
3.  For t=1,2,\ldots until some condition is met: 
4.  Increment \tilde x_j, where j is chosen so that \phi(\tilde x) does not increase. 
5.  Return \tilde x. 
This form is atypical, in that it skips the first step, solving the linear program. The trick is to find a pessimistic estimator \phi such that the choice of j in line 4 can be made without knowing the fractional solution x^*. Starting with a rounding scheme based on random sampling makes this possible. I call this kind of randomized rounding oblivious randomized rounding.
Many existing greedy/Lagrangian-relaxation algorithms can be derived this way. Prime examples are the greedy set-cover algorithm of Johnson [4], Lovász [5], and Chvátal [2], and the Lagrangian-relaxation algorithms of Plotkin, Shmoys, and Tardos [6] (comparable to algorithms of Grigoriadis and Khachiyan [3]). The latter compute (1+\varepsilon)-approximate solutions to fractional packing/covering LPs.
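For example, Chvátal’s weighted greedy set-cover algorithm has exactly the form shown earlier: the choice in line 4 becomes "increment the set of minimum cost per newly covered element", which (one can show) keeps a suitable pessimistic estimator from increasing, with no reference to x^*. A minimal Python sketch (the instance encoding is an illustrative choice; the sets are assumed to cover the universe):

```python
def greedy_set_cover(costs, sets, universe):
    """Greedy weighted set cover (Johnson/Lovasz/Chvatal): repeatedly
    pick the set of minimum "price" -- cost per newly covered element.
    Returns the chosen set indices and their total cost."""
    covered, chosen, total = set(), [], 0.0
    while covered != universe:
        # among sets that cover something new, take the one that is
        # cheapest per newly covered element
        j = min((j for j in sets if sets[j] - covered),
                key=lambda j: costs[j] / len(sets[j] - covered))
        chosen.append(j)
        total += costs[j]
        covered |= sets[j]
    return chosen, total
```

Note that the algorithm never solves (or even writes down) the LP; the fractional solution x^* appears only in the analysis, as the oblivious-rounding viewpoint promises.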
From this viewpoint, each algorithm follows from an existence proof “in a systematic way” [7] by applying the method of conditional probabilities. Standard probabilistic methods yield, and abstract, the arguments necessary to prove the performance guarantees of the algorithms (approximate primal-dual pairs, invariants, smooth penalty functions, etc.).
In short, the “probabilistic lens” [1] helps to organize and standardize the technical ideas underlying the design and analyses of these algorithms.
This collection of notes illustrates the ideas by example (notes on greedy, notes on Lagrangian relaxation). There are also notes on bounds related to random stopping times and other background material.