# Greedy examples

Deriving greedy algorithms via randomized rounding.

Greedy algorithms are often natural, but coming up with exactly the right algorithm or analysis can still be a challenge. Here, the idea is to view these algorithms as derandomized versions of a sample-and-increment rounding scheme. The proof of the performance guarantee starts with the rounding scheme, using standard probabilistic tools. Applying the method of conditional probabilities then yields the algorithm in a systematic way.

# Three introductory examples

The first example illustrates the basic idea. The rounding scheme randomly samples a fixed number of sets ($\lceil \ln(n)\,|x|\rceil$ suffices) from the distribution defined by the fractional solution $x$, where $n$ is the number of elements and $|x| = \sum_s x_s$. By direct calculation, the expected number of elements left uncovered is less than 1. Applying the method of conditional probabilities yields Johnson and Lovász’s greedy set-cover algorithm, and a proof that it returns a cover of size at most $\lceil \ln(n)\,|x^*|\rceil$, where $x^*$ is an optimal fractional cover. More on deriving the greedy set-cover algorithm…
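As a concrete sketch of this rounding scheme (the function name and the dict representation of the fractional solution are my own illustrative choices, not from the text), the scheme is just weighted sampling with replacement:

```python
import math
import random

def sample_cover(x, n_elements, seed=0):
    """Sample-and-increment rounding for unweighted set cover.

    x: dict mapping set name -> fractional value (assumed to be a
       feasible fractional cover, so each element is covered to
       total weight at least 1).
    Draws ceil(|x| * ln(n)) i.i.d. samples from the distribution that
    picks set s with probability x[s] / |x|, where |x| = sum_s x[s].
    """
    rng = random.Random(seed)
    names = list(x)
    size = sum(x.values())                       # |x| = sum_s x_s
    t = math.ceil(size * math.log(n_elements))   # number of samples
    weights = [x[s] / size for s in names]
    return set(rng.choices(names, weights=weights, k=t))
```

With this many samples, each element is left uncovered with probability below $1/n$, so the expected number of uncovered elements is below 1; a particular run may still miss elements.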

For problems whose cost functions have weights, the increase in cost with each sample depends on the sample. Also, instead of taking a pre-determined number of samples, the most natural rounding scheme stops when some condition is met (such as when all elements are covered). This makes the analysis of the rounding scheme a little more subtle, but the analysis for the unweighted case can be adapted using Wald’s equation.

In this example, the rounding scheme samples from the distribution defined by the fractional solution $x$ until all elements are covered. We use Wald’s equation to show that the expected number of samples before all elements are covered is at most $H(n)\,|x|$, where $H(n) = 1 + \frac{1}{2} + \cdots + \frac{1}{n} \le 1 + \ln n$, so the expected cost of all the samples is at most $H(n)\,c(x)$ (the fractional cost times $H(n)$). Applying the method of conditional probabilities yields Chvátal’s algorithm (which generalizes the greedy set-cover algorithm to the weighted case), and a proof that it returns an $H(n)$-approximation. More on using Wald’s to handle weighted set cover…
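A sketch of this stopping-rule variant, under the same assumed dict representation (the sets, their costs, and the fractional solution; `weighted_rounding` is an illustrative name):

```python
import random

def weighted_rounding(sets, costs, x, universe, seed=0):
    """Sample sets from the distribution x[s] / |x| until the sampled
    sets cover the universe; return the chosen sets and their cost.

    x is assumed to be a feasible fractional cover, so every element
    has positive sampling probability and the loop terminates with
    probability 1.
    """
    rng = random.Random(seed)
    names = list(x)
    size = sum(x.values())
    weights = [x[s] / size for s in names]
    chosen, covered = set(), set()
    while covered != universe:
        s = rng.choices(names, weights=weights)[0]  # one sample
        chosen.add(s)
        covered |= sets[s]
    return chosen, sum(costs[s] for s in chosen)
```

The number of iterations is the random stopping time bounded via Wald’s equation: its expectation times the expected cost per sample, $c(x)/|x|$, bounds the expected total cost.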

Here $\Delta$ is the maximum set size. The analyses in the examples above bound the number of samples necessary to meet all constraints, then multiply that by the cost per sample. With this “global” approach, the approximation ratio inevitably grows as $n$ increases, because the number of samples required to meet all constraints grows with $n$ (it’s about $\ln(n)\,|x|$). We can avoid this by recasting the analysis to make it “local”.

In this example, we modify the random sampling scheme so that, when a set $s$ is sampled, $s$ is added to the cover only if it contains an element that is not yet covered. For the analysis we focus on each set $s$ in isolation. Following the same reasoning as in the global analysis, the expected number of samples before all elements in $s$ are covered is at most $H(|s|)\,|x|$. (And $s$ won’t be added to the cover after that.) Thus, the probability that $s$ is added is at most $\frac{x_s}{|x|} \cdot H(|s|)\,|x| = H(|s|)\,x_s$. By linearity of expectation the expected cost of the final cover is at most $\sum_s H(|s|)\,c_s x_s$.
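The modified scheme differs from the previous one only in the acceptance test; a sketch under the same assumed representation (`local_rounding` is an illustrative name):

```python
import random

def local_rounding(sets, costs, x, universe, seed=0):
    """Sample sets from the distribution x[s] / |x| as before, but add
    a sampled set to the cover only if it still contains an uncovered
    element, so redundant samples contribute no cost.

    x is assumed to be a feasible fractional cover, so the loop
    terminates with probability 1.
    """
    rng = random.Random(seed)
    names = list(x)
    size = sum(x.values())
    weights = [x[s] / size for s in names]
    cover, covered = [], set()
    while covered != universe:
        s = rng.choices(names, weights=weights)[0]
        if sets[s] - covered:          # s has an uncovered element
            cover.append(s)
            covered |= sets[s]
    return cover, sum(costs[s] for s in cover)
```

Because each set is charged only while it still has an uncovered element, the per-set analysis above applies, and the expected cost depends on $H(|s|)$ rather than $H(n)$.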

Applying the method of conditional probabilities yields Chvátal’s algorithm again (although the pessimistic estimator is more subtle) and a proof that the algorithm returns a cover of cost at most $\sum_s H(|s|)\,c_s x_s$, which is at most $H(\Delta) \min_x c(x)$, where $x$ ranges over the feasible fractional set covers.
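For reference, Chvátal’s greedy rule repeatedly picks the set minimizing cost per newly covered element; a minimal sketch (the function name and dict representation are mine):

```python
def chvatal_greedy(sets, costs, universe):
    """Chvátal's greedy set-cover algorithm: repeatedly choose the set
    that minimizes cost per newly covered element, until the universe
    is covered (assumed coverable by the given sets)."""
    cover, covered = [], set()
    while covered != universe:
        # Consider only sets that still cover something new.
        s = min((s for s in sets if sets[s] - covered),
                key=lambda s: costs[s] / len(sets[s] - covered))
        cover.append(s)
        covered |= sets[s]
    return cover
```

This is the deterministic algorithm that the method of conditional probabilities extracts from the rounding scheme above; on each step it makes the choice that keeps the pessimistic estimator from increasing.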