Lagrangian-relaxation examples

Deriving Lagrangian-relaxation algorithms via randomized rounding.

We systematically derive Lagrangian-relaxation algorithms by starting with randomized-rounding schemes whose analyses use Chernoff bounds, then applying the method of conditional probabilities. The pessimistic estimator gives the potential function and invariant for the algorithm.



Four introductory examples

This example illustrates applying the method of conditional probabilities to a proof that is based on a standard Chernoff bound. The rounding scheme randomly samples a fixed number (about (1-\varepsilon)|f|) of paths from the distribution f/|f| defined by the fractional solution f. With each sample, it adds one unit of flow to the sampled path. By a standard Chernoff bound and the naive union bound, the expected number of over-capacity edges is less than 1. Applying the method of conditional probabilities yields the algorithm. The pessimistic estimator is the sum of exponential penalty terms, one for each edge capacity. (Each exponential term comes from one application of the Chernoff bound.)

To keep things technically simple in this example, we assume each edge has the same capacity c. The algorithm computes a (1-\varepsilon)-approximate maximum integer flow, as long as the edge capacity c is \Omega(\log(m)/\varepsilon^2). With appropriate scaling of the edge capacity, the number of iterations required is O(m\log(m)/\varepsilon^2).

The algorithm can also be used to compute a (1-\varepsilon)-approximate fractional flow for arbitrary \varepsilon \in (0,1). We can view this as rounding to finer-than-integer units \delta, where \delta is chosen just small enough to make it probable that the rounding scheme gives a (1-\varepsilon)-approximate solution.
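To make the derivation concrete, here is a minimal Python sketch of the kind of algorithm that emerges, under two simplifying assumptions: the candidate source-sink paths are given as an explicit (hypothetical) list, whereas a real implementation would find the cheapest path under the current edge weights with a shortest-path subroutine, and the constants in the penalty update and the stopping rule are illustrative rather than the ones fixed by the Chernoff-bound analysis.

```python
def route_uniform_capacity(paths, num_edges, cap, eps, iters):
    """Derandomized-rounding sketch for maximum multicommodity flow when
    every edge has the same capacity `cap`.

    `paths` is a hypothetical explicit list of candidate source-sink paths,
    each a list of edge indices in range(num_edges).  The per-edge weights
    play the role of the pessimistic estimator's exponential penalty terms:
    their sum bounds the probability, in the rounding scheme, that some edge
    goes over capacity.
    """
    weight = [1.0] * num_edges          # one penalty term per edge capacity
    load = [0] * num_edges              # units of flow routed across each edge
    flow = [0] * len(paths)             # units routed on each candidate path

    for _ in range(iters):
        # Conditional-probabilities step: route the next unit on the path
        # that increases the sum of penalty terms the least, i.e. the
        # cheapest path under the current edge weights.
        best = min(range(len(paths)),
                   key=lambda i: sum(weight[e] for e in paths[i]))
        if any(load[e] + 1 > cap for e in paths[best]):
            break                       # guard: never push an edge over capacity
        flow[best] += 1
        for e in paths[best]:
            load[e] += 1
            weight[e] *= 1.0 + eps / cap   # multiplicative penalty update
    return flow
```

In the sketch, picking the cheapest path is the conditional-probabilities step: among the available choices, it is the one that increases the sum of penalty terms the least.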

Next we drop the assumption that the edges all have the same capacity. This example illustrates how tweaking the analysis of the rounding scheme and adjusting the pessimistic estimator can lead to a more elegant algorithm.

Assuming every edge capacity c_e is \Omega(\log(m)/\varepsilon^2), the algorithm computes a (1-\varepsilon)-approximate maximum integer flow. The algorithm can also be used to compute a (1-\varepsilon)-approximate fractional flow for arbitrary \varepsilon \in (0,1). The number of iterations required (with appropriate scaling of the edge capacities) is O\big((\sum_e c_e/c_{\min})\log(m)/\varepsilon^2\big), which can be exponential.

The iteration bound is comparable to the bounds of Plotkin et al. [2]. This particular algorithm and analysis are from [3].
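A sketch of the corresponding per-edge-capacity variant, under the same assumptions as before (hypothetical explicit path list, illustrative constants); it is not the algorithm of [3], only a plausible concrete form. The substantive change from the uniform-capacity sketch is that each edge's penalty term is initialized and updated using that edge's own capacity c_e, and the run stops once the chosen path contains a full edge.

```python
def route_nonuniform_capacity(paths, cap, eps):
    """Sketch of the variant with per-edge capacities cap[e].

    Relative to the uniform-capacity sketch, each edge's penalty term is
    normalized by that edge's own capacity, and the run stops as soon as the
    chosen path contains a full edge.  `paths` is again a hypothetical
    explicit list of candidate paths (lists of edge indices).
    """
    m = len(cap)
    weight = [1.0 / cap[e] for e in range(m)]   # capacity-normalized penalties
    load = [0] * m
    flow = [0] * len(paths)

    while True:
        best = min(range(len(paths)),
                   key=lambda i: sum(weight[e] for e in paths[i]))
        if any(load[e] + 1 > cap[e] for e in paths[best]):
            break                                # an edge on the path is full
        flow[best] += 1
        for e in paths[best]:
            load[e] += 1
            weight[e] *= 1.0 + eps / cap[e]      # gentler update on big edges
    return flow
```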

For problems whose cost functions have weights, the increase in cost with each sample depends on the sample. Also, instead of sampling for a pre-determined number of samples, the most natural rounding scheme stops when some condition is met (such as when some edge reaches capacity). In this case, the standard Chernoff bound doesn’t quite apply, but it can be modified appropriately in various ways.

In this example the goal is to compute a maximum-value feasible flow, where the value of each unit of flow depends on its commodity. The rounding scheme samples until an edge reaches capacity. With each sample, the increase in flow value depends on the sample. The analysis adapts the Chernoff bound to handle random stopping times.

Assuming each edge capacity is at least 2\ln(m)/\varepsilon^2, the algorithm computes a (1-\varepsilon)-approximate maximum-value flow. The algorithm can also be used to compute a (1-\varepsilon)-approximate fractional flow for arbitrary \varepsilon \in (0,1). The number of iterations required is O\big((\sum_e c_e/c_{\min})\log(m)/\varepsilon^2\big), which can be exponential.
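A sketch of this variant, again with a hypothetical explicit path list and illustrative constants. The greedy rule shown, routing a unit on the path that earns the most value per unit of current penalty, is one plausible choice; the example derives the precise rule from the modified Chernoff bound. The loop mirrors the rounding scheme's stopping rule: it runs until some edge reaches capacity.

```python
def max_value_flow(paths, value, cap, eps):
    """Sketch of the unit-increment algorithm for maximum-value flow, where
    each unit routed on path p earns the commodity-dependent value value[p].

    The greedy rule (most value per unit of current penalty) is one plausible
    choice; the example derives the exact rule from the modified Chernoff
    bound.  Integer capacities are assumed, so the loop can simply run until
    some edge reaches capacity, mirroring the rounding scheme.
    """
    m = len(cap)
    weight = [1.0 / cap[e] for e in range(m)]   # capacity-normalized penalties
    load = [0] * m
    flow = [0] * len(paths)

    while all(load[e] < cap[e] for e in range(m)):
        best = max(range(len(paths)),
                   key=lambda i: value[i] / sum(weight[e] for e in paths[i]))
        flow[best] += 1
        for e in paths[best]:
            load[e] += 1
            weight[e] *= 1.0 + eps / cap[e]
    return flow
```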

The algorithms in the examples above have a weakness: in general, they can take exponentially many iterations. To fix this, one more idea is needed: Garg and Könemann’s non-uniform increments, that is, increasing the step size maximally in each iteration [1].

This example applies the idea to the previous Maximum Multicommodity Flow algorithm. In each iteration, the modified algorithm increases the flow on the chosen path p by an integer \delta_p proportional to the minimum capacity of any edge in p. (We can think of this as taking \delta_p steps at once, all with the same path p.) The modification preserves the performance guarantee with a minor adjustment to the proof. The worst-case number of iterations required is O(m\log(m)/\varepsilon^2) (with arbitrary edge capacities). The resulting algorithm is essentially equivalent to Garg and Könemann’s [1].

Instead of viewing the modified algorithm as a tweak of the original one, it is possible to derive it directly using randomized rounding. The rounding scheme, instead of incrementing \tilde f_p by 1 for each sampled path p, increments \tilde f_p by \delta_p. The sampling distribution is adjusted accordingly: the probability of sampling a given path p is made proportional to f_p/\delta_p (where f is the fractional solution being rounded), instead of f_p. With this change, the rounding scheme has a random stopping time, and its analysis requires a modified Chernoff bound. We discuss this briefly in the example.
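A sketch in the spirit of Garg and Könemann’s algorithm [1], with the same caveats as before: the path list is a hypothetical explicit input, the increment \delta_p is taken equal to (rather than merely proportional to) the minimum capacity on the chosen path, and a fixed iteration count with a final scaling step stands in for the stopping rule and scaling that the analysis prescribes.

```python
def max_flow_nonuniform_increments(paths, cap, eps, iters):
    """Sketch of the non-uniform-increment variant, in the spirit of Garg and
    Könemann [1].

    Each iteration routes delta = (minimum capacity on the chosen path) at
    once, so the bottleneck edge's penalty grows by a full (1+eps) factor per
    iteration; this is what caps the iteration count at O(m log(m)/eps^2)
    regardless of the capacities.  The flow may exceed capacities during the
    run and is scaled down to a feasible flow at the end.
    """
    m = len(cap)
    weight = [1.0 / cap[e] for e in range(m)]   # capacity-normalized penalties
    load = [0.0] * m
    flow = [0.0] * len(paths)

    for _ in range(iters):
        best = min(range(len(paths)),
                   key=lambda i: sum(weight[e] for e in paths[i]))
        delta = min(cap[e] for e in paths[best])        # non-uniform increment
        flow[best] += delta
        for e in paths[best]:
            load[e] += delta
            weight[e] *= 1.0 + eps * delta / cap[e]     # bottleneck: full (1+eps)

    # Scale down so that no edge exceeds its capacity.
    congestion = max(load[e] / cap[e] for e in range(m))
    scale = 1.0 / congestion if congestion > 1.0 else 1.0
    return [f * scale for f in flow]
```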

A few more examples

Remark.

Applying this idea to covering problems requires one more technique: dropping covering constraints once they are met, and adjusting the increments accordingly.

Like the example for Multicommodity Flow with uniform edge capacities, this example uses a standard Chernoff bound. The algorithm computes a (1\pm\varepsilon)-approximate (and sparse) mixed strategy for any two-player zero-sum matrix game with non-negative payoffs, given the payoff matrix. The number of iterations required (and the size of the support of the mixed strategy) is O(M\log(m)/\varepsilon^2). Here M is the ratio of the maximum payoff to the value of the game. The number of iterations can be exponential.
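A sketch of the resulting multiplicative-weights computation for matrix games, assuming the payoff matrix (payoffs to the maximizing column player) is given explicitly, has at least one positive entry, and that the iteration count and constants are illustrative rather than the ones fixed by the analysis.

```python
def approx_game_strategy(payoff, eps, iters):
    """Sketch: a sparse, approximately optimal mixed strategy for a zero-sum
    matrix game with non-negative payoffs, via multiplicative weights.

    payoff[i][j] >= 0 is the payoff to the (maximizing) column player when
    the row player plays i and the column player plays j; at least one entry
    is assumed positive.  The row weights are the exponential penalty terms;
    the returned column strategy is uniform over the best responses chosen,
    so its support has at most `iters` pure strategies.
    """
    m, n = len(payoff), len(payoff[0])
    top = max(max(row) for row in payoff)       # normalizer for the updates
    w = [1.0] * m                               # one penalty term per row
    counts = [0] * n                            # how often each column is played

    for _ in range(iters):
        total = sum(w)
        p = [wi / total for wi in w]            # current row distribution
        # Column player's greedy best response to the row distribution p.
        j = max(range(n),
                key=lambda col: sum(p[i] * payoff[i][col] for i in range(m)))
        counts[j] += 1
        # Penalize rows that pay out a lot against the chosen column.
        for i in range(m):
            w[i] *= (1.0 - eps) ** (payoff[i][j] / top)
    return [c / iters for c in counts]          # sparse approximate mixed strategy
```

Returning the empirical distribution of the chosen columns is what makes the strategy sparse: its support size is at most the number of iterations.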

This example also uses a standard Chernoff bound. The main difference is that this example is for a covering problem (unweighted fractional set cover) instead of a packing problem. The algorithm computes a (1+\varepsilon)-approximate minimum-size fractional set cover in O(m\log(m)/\varepsilon^2) iterations.
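A sketch for unweighted fractional set cover, assuming the sets are given explicitly as collections of element indices and every element belongs to at least one set. The coverage target T is a hypothetical choice of the right order of magnitude; the example's analysis fixes the exact constant. Note how covering constraints (elements) are dropped once they meet the target; compare the remark above.

```python
import math

def fractional_set_cover(sets, num_elements, eps):
    """Sketch: (1+eps)-approximate minimum-size fractional set cover.

    `sets` is a list of sets (or lists) of element indices in
    range(num_elements), and every element is assumed to belong to at least
    one set.  The coverage target T is a hypothetical choice of the right
    order; the analysis in the example fixes the exact constant.
    """
    m = num_elements
    T = max(1, math.ceil(math.log(m) / eps ** 2))   # coverage target before scaling
    cov = [0] * m            # how many chosen sets cover each element so far
    weight = [1.0] * m       # one exponential penalty term per covering constraint
    x = [0] * len(sets)      # unscaled fractional solution

    while min(cov) < T:
        # Greedy step: the set covering the most remaining penalty weight.
        # Constraints (elements) are dropped once they meet the target.
        best = max(range(len(sets)),
                   key=lambda s: sum(weight[e] for e in sets[s] if cov[e] < T))
        x[best] += 1
        for e in sets[best]:
            if cov[e] < T:
                cov[e] += 1
                weight[e] *= 1.0 - eps
    return [xs / T for xs in x]   # scale so each element is covered to extent >= 1
```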

This example is similar to the Weighted Multicommodity Flow example, in that it adapts the Chernoff bound to handle random stopping times in order to deal with the fact that the objective function has weights. This example is for a covering problem. The number of iterations required to compute a (1+\varepsilon)-approximate minimum-cost fractional set cover is O\big((\sum_s c_s/c_{\min})\log(m)/\varepsilon^2\big), which can be exponential.
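A sketch of the minimum-cost variant, under the same assumptions as the unweighted sketch above. Only the greedy criterion changes: the chosen set maximizes penalty weight covered per unit cost. This rule is one plausible choice; the example derives its rule from the Chernoff bound adapted to random stopping times.

```python
import math

def weighted_fractional_set_cover(sets, cost, num_elements, eps):
    """Sketch of the minimum-cost variant of the fractional set cover sketch.

    Only the greedy criterion changes: the chosen set maximizes remaining
    penalty weight covered per unit cost.  `cost[s] > 0` is the cost of set s;
    other assumptions are as in the unweighted sketch.
    """
    m = num_elements
    T = max(1, math.ceil(math.log(m) / eps ** 2))
    cov = [0] * m
    weight = [1.0] * m
    x = [0] * len(sets)

    while min(cov) < T:
        best = max(range(len(sets)),
                   key=lambda s: sum(weight[e] for e in sets[s] if cov[e] < T) / cost[s])
        x[best] += 1
        for e in sets[best]:
            if cov[e] < T:
                cov[e] += 1
                weight[e] *= 1.0 - eps
    return [xs / T for xs in x]
```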