# Multicommodity Flow II: width-independence via non-uniform increments

Speeding up the algorithm by increasing the step size.

The previous rounding schemes increase the chosen variable by 1 with each sample. This can be exponentially slow. To speed it up, we modify the rounding scheme (or algorithm) to take larger steps, with the step size varying each iteration. Here we use the idea to speed up the Multicommodity-Flow algorithm.

This specific algorithm comes from Garg and Könemann [1, 2, 3], although the basic idea (called a trust region) is well known.

While we’re at it, we also generalize the algorithm to handle maximum-weight Multicommodity Flow.

# Rounding scheme with non-uniform increments

input: instance $(G, P, v, c)$, $\varepsilon$; output: integer flow $\hat f$

0. Compute a maximum-value fractional multicommodity flow $f^*$.
1. Initialize $\hat f(p) = 0$ for every path $p \in P$. Let $\tilde c = \log_{1+\varepsilon}(m)/\varepsilon$, where $m = |C|$ is the number of edges with capacity constraints.
2. Repeat until some edge is saturated:
3. For each path $p$, define increment $z_p = \lceil \min_{e \in p \cap C} c_e / \tilde c \rceil$, then choose path $\hat p$ randomly from the probability distribution $\pi$ such that $\pi_p = \beta\, f^*(p)/z_p$ (choosing $\beta$ so that $\sum_p \pi_p = 1$).
4. Increase $\hat f(\hat p)$ by $z_{\hat p}$.
5. Return $\hat f$.

For each sample, the scheme fixes an increment $z_p$ for each path $p$, then samples a random path $\hat p$ and increases the flow on the sampled path by $z_{\hat p}$ (instead of 1). The sampling distribution is chosen so that the expected increase in each variable $\hat f(p)$ is $\beta\, f^*(p)$ for some $\beta > 0$.
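To make the sampling step concrete, here is a sketch of one sample in Python, on a made-up two-path instance. The specific choices here — increment $z_p$ equal to the bottleneck capacity of $p$ scaled down by $\tilde c$ (rounded up to keep the flow integral), and probabilities $\pi_p \propto f^*(p)/z_p$ — are assumptions of this sketch, not the only possible ones.

```python
import math
import random

# One sampling step of the non-uniform-increment rounding scheme.
# The instance (paths, capacities, fractional flow fstar) is hypothetical.
eps = 0.1
paths = {"p1": ["e1", "e2"], "p2": ["e2", "e3"]}   # path -> capacitated edges
cap = {"e1": 8.0, "e2": 4.0, "e3": 6.0}            # edge capacities c_e
fstar = {"p1": 2.0, "p2": 1.5}                     # fractional flow f*(p)

m = len(cap)                                       # number of capacitated edges
c_tilde = math.log(m) / (eps * math.log(1 + eps))  # c~ = log_{1+eps}(m) / eps

# Increment z_p: bottleneck capacity of p, scaled down by c~ (rounded up).
z = {p: math.ceil(min(cap[e] for e in es) / c_tilde) for p, es in paths.items()}

# Distribution pi_p proportional to f*(p)/z_p, so that the expected increase
# in f(p), namely pi_p * z_p, equals beta * f*(p) for a single beta > 0.
beta = 1.0 / sum(fstar[p] / z[p] for p in paths)
pi = {p: beta * fstar[p] / z[p] for p in paths}

assert abs(sum(pi.values()) - 1.0) < 1e-12
for p in paths:             # expected increase is proportional to f*(p)
    assert abs(pi[p] * z[p] - beta * fstar[p]) < 1e-12

p_hat = random.choices(list(paths), weights=[pi[p] for p in paths])[0]
```

One sample then increases $\hat f(\hat p)$ by `z[p_hat]`; the assertions check the defining property of the distribution, that the expected increase in each $\hat f(p)$ is proportional to $f^*(p)$.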

Lemma (existence).

Let $m$ be the number of edges with capacity constraints. For any $\varepsilon$ such that $0 < \varepsilon \le 1/2$, the flow $\hat f$ returned by the rounding scheme is feasible and has expected value at least $(1-2\varepsilon)\, v(f^*)$.

Because the analysis must use a random stopping time, the standard Chernoff bound doesn’t quite give the right proof. We adapt Chernoff to handle the random stopping time. It is possible to adapt it to give a tail bound on the probability of failure, but instead we adapt it to bound the expected flow size directly. This allows us to localize the bound later.

Define

$\psi (\tilde f) ~ =~ v(\tilde f) ~ +~ {\textstyle \frac{\ln (1+\varepsilon )}{\tilde c\, \varepsilon }}\, \Big[\tilde c - \operatorname {smax}_{e\in C} \frac{\tilde f(e)\, \tilde c}{c_e} \Big] \, v(f^*)$

where $\operatorname{smax}_{e\in C} x_e = \log_{1+\varepsilon} \sum_{e\in C} (1+\varepsilon)^{x_e}$. This is a smooth approximation to $\max_{e\in C} x_e$.

To prove the lemma we show that $\psi$ is a valid pessimistic estimator for the conditional expectation of the final flow value, and has initial value at least $(1-2\varepsilon)\, v(f^*)$.
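To see where the initial value comes from, evaluate $\psi$ at $\tilde f = 0$, using $\operatorname{smax}_{e\in C} 0 = \log_{1+\varepsilon} m$ and taking $\tilde c = \log_{1+\varepsilon}(m)/\varepsilon$ for concreteness:

$\psi(0) ~=~ \frac{\ln(1+\varepsilon)}{\tilde c\,\varepsilon}\Big[\tilde c - \log_{1+\varepsilon} m\Big]\, v(f^*) ~=~ \frac{\ln(1+\varepsilon)}{\varepsilon}\,(1-\varepsilon)\, v(f^*) ~\ge~ (1-\varepsilon/2)(1-\varepsilon)\, v(f^*) ~\ge~ (1-2\varepsilon)\, v(f^*),$

using $\ln(1+\varepsilon) \ge \varepsilon - \varepsilon^2/2$ for $\varepsilon \ge 0$.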

For the case of unit values ($v_p = 1$ for every $p$), in fact, $\psi$ is the (linearly scaled and translated)¹ logarithm of the pessimistic estimator $\phi$ for the “slow” multicommodity-flow algorithm. The calculations in this proof are somewhat forbidding, but they are essentially just the calculations underlying that proof in a different guise.

# Algorithm with non-uniform increments

input: instance $(G, P, v, c)$, $\varepsilon$; output: integer flow $\hat f$

1. Initialize $\hat f(p) = 0$ for every path $p \in P$. Let $\tilde c = \log_{1+\varepsilon}(m)/\varepsilon$.
2. Repeat until some edge is saturated:
3. Choose path $\hat p \in P$ to minimize $\frac{1}{v_{\hat p}} \sum_{e \in \hat p \cap C} y_e / c_e$, where $y_e = (1+\varepsilon)^{\hat f(e)\,\tilde c / c_e}$.
4. Increase $\hat f(\hat p)$ by $z_{\hat p} = \lceil \min_{e \in \hat p \cap C} c_e / \tilde c \rceil$.
5. Return $\hat f$.

Next we apply the method of conditional probabilities to derive the algorithm.

We simply reuse the pessimistic estimator (on the conditional expectation of the final flow size) from the existence proof. The algorithm chooses the path in each iteration to keep the pessimistic estimator from decreasing, and so guarantees a successful outcome.
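The resulting greedy loop can be sketched in Python as follows. The toy instance is hypothetical, and the edge lengths $y_e = (1+\varepsilon)^{\hat f(e)\,\tilde c/c_e}$ and bottleneck-based increments are assumptions of this sketch; purely to keep the sketch's output feasible, the loop stops just before any edge would be overfilled.

```python
import math

# Greedy (derandomized) loop on a hypothetical toy instance: repeatedly
# route z_p units along the path minimizing the value-normalized length
# sum_{e in p} y_e / c_e, where y_e = (1+eps)^(f(e) * c~ / c_e).
eps = 0.25
paths = {"p1": ["e1", "e2"], "p2": ["e3"]}
val = {"p1": 1.0, "p2": 1.0}              # path values v_p (unit here)
cap = {"e1": 10.0, "e2": 12.0, "e3": 9.0}

m = len(cap)
c_tilde = math.log(m) / (eps * math.log(1 + eps))

f_path = {p: 0 for p in paths}            # flow per path
f_edge = {e: 0 for e in cap}              # flow per edge

def length(p):
    # Value-normalized length of path p under edge lengths y_e / c_e.
    return sum((1 + eps) ** (f_edge[e] * c_tilde / cap[e]) / cap[e]
               for e in paths[p]) / val[p]

while True:
    p_hat = min(paths, key=length)        # shortest path under current lengths
    z = math.ceil(min(cap[e] for e in paths[p_hat]) / c_tilde)
    if any(f_edge[e] + z > cap[e] for e in paths[p_hat]):
        break                             # stop just before overfilling an edge
    f_path[p_hat] += z
    for e in paths[p_hat]:
        f_edge[e] += z
```

Note that, unlike the randomized scheme, this loop never consults $f^*$: the shortest-path choice plays the role of the sampling distribution.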

The performance guarantee follows:

Theorem ([1, 2, 3]).

Let $m$ be the number of capacity constraints. For any $\varepsilon$ such that $0 < \varepsilon \le 1/2$, given the graph $G$, the set of paths $P$, and $\varepsilon$, the algorithm above returns a feasible integer flow whose value is at least $(1-2\varepsilon)\, v(f^*)$, where $f^*$ is any maximum-value feasible flow.

Finally, we bound the number of iterations that the algorithm (and rounding scheme) take.

Theorem ([1, 2, 3]).

The number of iterations taken by the algorithm is at most $\min(m, W)\,\tilde c = O(\min(m, W) \log(m)/\varepsilon^2)$, where $W = \max_{p\in P} \max_{e \in p \cap C} |f^*|/c_e$ is the “width” of the problem instance.

Proof.

Recall $\tilde c = \log_{1+\varepsilon}(m)/\varepsilon$, so the desired bound is $\min(m, W)\, O(\log(m)/\varepsilon^2)$.

Each iteration except the last increases the flow on some edge $e$ (the bottleneck edge, the one determining the minimum in the definition of $z_{\hat p}$) by $z_{\hat p}$, which is at least $c_e/\tilde c$, so the number of iterations (before some edge is saturated) is at most $m\,\tilde c$.

Each iteration except the last increases the total flow by at least $z_{\hat p}$, which is at least $|f^*|/(W\tilde c)$, so the number of iterations is at most $W\tilde c$.
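For a sense of scale, taking $\tilde c = \log_{1+\varepsilon}(m)/\varepsilon$ as above: with $m = 100$ capacitated edges and $\varepsilon = 0.1$, we get $\tilde c = \log_{1.1}(100)/0.1 \approx 483$, so the edge-based bound gives at most $m\tilde c \approx 48{,}300$ iterations — regardless of how large the width $W$ is.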

## Footnotes

1. Let $\phi(\tilde f)$ denote that pessimistic estimator. Then
$\psi (\tilde f) = \frac{|f^*|}{1+\varepsilon } - \frac{|f^*| \ln (1+\varepsilon )}{\tilde c\, \varepsilon } \log _{1+\varepsilon } \phi (\tilde f).$

For intuition, note that if $\phi$ is a super-martingale, then so is $\log_{1+\varepsilon} \phi$, which would make $\psi$ a sub-martingale. But note that we haven’t in fact shown that $\phi$ is a super-martingale with respect to the rounding scheme with increments $z_p$.