# Lagrangian relaxation / example

A simple example of a Lagrangian-relaxation algorithm.

The algorithm is for Maximum Multicommodity Flow. It illustrates some prototypical aspects of Lagrangian relaxation.

# Algorithm

To ease presentation, assume that each edge capacity is at least 1. (If not, divide all edge capacities by the minimum edge capacity before running the algorithm.) The algorithm finds a near-maximum multicommodity flow. Throughout, $m$ denotes the number of edges.

 input: graph $G=(V,E)$ with $m$ edges and edge capacities $c$, each at least 1; collection $\mathcal{P}$ of paths in $G$; accuracy parameter $\varepsilon \in (0,1/2)$
 output: multicommodity flow $f$
 1. Let $f(e) \leftarrow 0$ for each edge $e$.
 2. Repeat until some edge $e$ has at least $c(e)\ln(m)/\varepsilon^2$ units of flow through it:
 3. Send $\varepsilon$ units of flow along a path $p \in \mathcal{P}$, where $p$ minimizes $\sum_{e\in p} \psi_e/c(e)$ with $\psi_e = \exp(f(e)/c(e))$. (Typically a shortest-path algorithm is used to find $p$.)
 4. Let $\lambda = \max_e f(e)/c(e)$.
 5. Return $f'$, where $f'(p) = f(p)/\lambda$ for $p \in \mathcal{P}$. …Scaling down by $\lambda$ makes $f'$ feasible.
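The steps above can be sketched in runnable Python. The data layout is our own illustration, not from the original: edges are ids, `capacities` is a dict of capacities (each at least 1, as assumed above), and each path is a tuple of edge ids.

```python
import math

def max_multicommodity_flow(edges, capacities, paths, eps):
    """Lagrangian-relaxation sketch; assumes every capacity is >= 1."""
    f = {e: 0.0 for e in edges}  # f[e]: flow through edge e
    # stop once some edge's congestion f(e)/c(e) reaches ln(m)/eps^2
    threshold = math.log(len(edges)) / eps ** 2
    while all(f[e] / capacities[e] < threshold for e in edges):
        # path minimizing the gradient weights psi_e / c(e),
        # where psi_e = exp(f(e)/c(e))
        p = min(paths, key=lambda q: sum(
            math.exp(f[e] / capacities[e]) / capacities[e] for e in q))
        for e in p:
            f[e] += eps  # send eps units of flow along p
    # scale down by the maximum congestion to make the flow feasible
    lam = max(f[e] / capacities[e] for e in edges)
    return {e: f[e] / lam for e in edges}
```

For example, on two parallel unit-capacity single-edge paths the returned flow has total value close to the maximum of 2, and no edge exceeds its capacity.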

# Performance guarantee

In the 1990s, the following approximation guarantee was proved for algorithms similar to this one:

Theorem (e.g. [5, 2, 3, 4, 6, 1]).

The algorithm above returns a flow of size at least $1-O(\varepsilon)$ times the maximum.

The potential function $\ln \sum_e \exp(f(e)/c(e))$ in the proof combines all the hard edge-capacity constraints into a single smooth function. This is the hallmark of a Lagrangian-relaxation algorithm.
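Concretely, writing $\psi_e = \exp(f(e)/c(e))$, the potential $\ln \sum_e \psi_e$ is a smooth maximum of the edge congestions: for a graph with $m$ edges,

$\textstyle \max_e f(e)/c(e) \;\le\; \ln \sum_e \exp(f(e)/c(e)) \;\le\; \max_e f(e)/c(e) + \ln m,$

so keeping the potential small keeps every congestion small, even though no individual capacity constraint is enforced directly.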

# Width-based bound on the convergence rate

Historically, Lagrangian-relaxation algorithms have generally been used heuristically, without proven bounds on their worst-case convergence rates. One of the main contributions of research since the 1990s has been to prove tight worst-case upper bounds on convergence rates, and to find potential functions and step sizes that lead to faster convergence rates.

Lemma.

The algorithm above makes at most $\mathrm{opt}\ln(m)/\varepsilon^3$ iterations, where $\mathrm{opt}$ denotes the value of a maximum multicommodity flow.

Proof.

The algorithm scales $f$ down by roughly $\ln(m)/\varepsilon^2$ at the end. The resulting flow is feasible, so has value at most $\mathrm{opt}$. It follows that $f$ before scaling has value at most about $\mathrm{opt}\ln(m)/\varepsilon^2$. Since each iteration increases the value of $f$ by $\varepsilon$, it follows that there are at most about $\mathrm{opt}\ln(m)/\varepsilon^3$ iterations.

It is not hard to see that this upper bound is tight for this algorithm on all inputs. (It stops when the maximum edge congestion is about $\ln(m)/\varepsilon^2$, so that scaling down by $\ln(m)/\varepsilon^2$ gives a flow of value about $\mathrm{opt}$.)

In general, the previous approach gives bounds on the number of iterations that depend on the so-called width of the problem instance. In this case the width is $\mathrm{opt}$, which is at most $\sum_e c(e)$. In some problems of interest the width can be as small as $O(1)$, but in general the width can be large.

# Width-independent bounds on convergence rates

To reduce the number of iterations, modify the algorithm as follows (following Garg and Könemann). In each iteration, instead of sending $\varepsilon$ units of flow, send $\varepsilon\, c(p)$ units, where $c(p) = \min_{e\in p} c(e)$ is the minimum capacity of any edge on the chosen path $p$. It is not hard to modify the proof to show that the approximation ratio continues to hold: each edge congestion still increases by at most $\varepsilon$ per iteration.
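A sketch of the modified flow-sending step (the names and data layout are ours: `f` maps edge ids to their current flow, `capacities` maps edge ids to capacities, and a path is a tuple of edge ids):

```python
def bottleneck(path, capacities):
    # c(p): the minimum capacity over the edges of the path
    return min(capacities[e] for e in path)

def send_flow(f, path, capacities, eps):
    # Garg-Konemann step: send eps * c(p) units instead of eps,
    # so the bottleneck edge's congestion rises by exactly eps
    amount = eps * bottleneck(path, capacities)
    for e in path:
        f[e] += amount
    return amount
```

The point of the larger step is that some edge's congestion now rises by exactly $\varepsilon$ in every iteration, which is what drives the width-independent iteration bound.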

Lemma.

The modified algorithm does at most $m\ln(m)/\varepsilon^3$ iterations.

Proof.

In each iteration, at least one edge (an edge of minimum capacity on the chosen path) has its congestion increase by $\varepsilon$. Thus, within $m\ln(m)/\varepsilon^3$ iterations, at least one of the $m$ edges must have congestion at least $\ln(m)/\varepsilon^2$, and the algorithm will terminate.

The bound of $\mathrm{opt}\ln(m)/\varepsilon^3$ iterations also holds, because each iteration increases the value of $f$ by at least $\varepsilon$ (recall that $c(p) \ge 1$).

# Exercise: fractional set cover

Given a fractional set cover $x$, let $c_x(e) = \sum_{s \ni e} x_s$ denote the coverage of any element $e$. Consider the following algorithm for finding an approximately minimum-size fractional set cover:

Start with $x_s = 0$ for each set $s$, then, in each iteration, add $\varepsilon$ to $x_s$ where $s$ maximizes $\sum_{e \in s} \exp(-c_x(e))$. Stop when every element is covered by at least a total weight of $\ln(n)/\varepsilon^2$, and return $x$, scaled down by the minimum element coverage $\min_e c_x(e)$.

Use the potential function $-\ln \sum_e \exp(-c_x(e))$, a lower bound on the minimum coverage $\min_e c_x(e)$ of any element $e$, to show that the algorithm is a $1+O(\varepsilon)$-approximation algorithm. (Hint: show that the potential increases by at least $(1-O(\varepsilon))\,\varepsilon/\mathrm{opt}$ in each iteration, where $\mathrm{opt}$ is the minimum size of a fractional set cover.)

Modify the algorithm to delete elements as soon as their coverage exceeds $\ln(n)/\varepsilon^2$ (ignoring deleted elements when choosing $s$). Show that the approximation ratio still holds, and that the resulting algorithm requires $O(n\ln(n)/\varepsilon^3)$ iterations, where $n$ is the number of elements to be covered.
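For concreteness, here is a runnable sketch of the exercise's algorithm with element deletion. The names and data layout are our own assumptions: `sets` maps set names to collections of elements, and every element is assumed to belong to at least one set.

```python
import math

def fractional_set_cover(elements, sets, eps):
    """Greedy Lagrangian-relaxation sketch for fractional set cover."""
    x = {s: 0.0 for s in sets}
    coverage = {e: 0.0 for e in elements}  # c_x(e) for each element e
    target = math.log(len(elements)) / eps ** 2
    active = set(elements)  # elements whose coverage is still below target
    while active:
        # greedy step: pick the set maximizing the sum of exp(-coverage)
        # over its still-active elements
        best = max(sets, key=lambda s: sum(
            math.exp(-coverage[e]) for e in sets[s] if e in active))
        x[best] += eps
        for e in sets[best]:
            coverage[e] += eps
        # delete elements whose coverage has reached the target
        active = {e for e in active if coverage[e] < target}
    # scale down by the minimum coverage to make x a feasible cover
    scale = min(coverage.values())
    return {s: x[s] / scale for s in x}
```

On a toy instance where one set covers everything, the algorithm puts all its weight on that set, and the scaled output is a feasible cover of near-minimum size.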

## Footnotes

1. Here is a full proof of bound (\ref{eq:1}). Let $\delta_e$ denote the increase in the congestion $f(e)/c_e$ on edge $e$ in a single iteration, so $\delta_e \le \varepsilon$. Using $e^x = 1 + x + O(x^2)$ for $|x| \le 1$,
$\exp(f(e)/c_e+\delta_e) \, =\, \psi_e\exp(\delta_e) \, =\, \psi_e (1+\delta_e +O(\delta_e^2)) \, \le \, \psi_e (1+(1+O(\varepsilon))\delta_e).$

So the increase in the potential $\ln \sum_e \psi_e$ is at most

$\textstyle \ln (\sum_e \psi_e(1+(1+O(\varepsilon))\delta_e)) -\ln (\sum_e \psi_e) ~ =~ \displaystyle \ln \frac{\sum_e \psi_e(1+(1+O(\varepsilon))\delta_e)}{\sum_e \psi_e}$
$=~ \ln \Big[1 + (1+O(\varepsilon ))\frac{\sum _ e \delta _ e\psi _ e}{\sum _ e \psi _ e}\Big] ~ \le ~ (1+O(\varepsilon ))\frac{\sum _ e \delta _ e\psi _ e}{\sum _ e \psi _ e}.$