Speeding up the algorithm by increasing the step size.
The previous rounding schemes increase the chosen variable \tilde x_ j by 1 with each sample. This can be exponentially slow. To speed it up, we modify the rounding scheme (or algorithm) to take larger steps, with the step size varying each iteration. Here we use the idea to speed up the Multicommodity-Flow algorithm.
This specific algorithm comes from Garg and Könemann [1, 2, 3], although the basic idea (called a trust region) is well known.
While we’re at it, we also generalize the algorithm to handle maximum-weight Multicommodity Flow.
Rounding scheme with non-uniform increments
input: instance {\cal I}=(\mbox{graph } G, \mbox{set of paths } P, \mbox{capacities } (C,c), \mbox{values } v) | |
output: integer flow \tilde f for {\cal I} | |
0. | Compute a maximum-value fractional multicommodity flow f^*. |
1. | Initialize \tilde{f_ p}=0 for every path p\in P. Let \tilde c = 3(1+\varepsilon )\ln (m)/\varepsilon ^2. |
2. | Repeat until some edge is saturated (\exists e\in C.~ \tilde f(e) = c_ e): |
3. | For each path p\in P, define increment \delta _ p = \lfloor \min _{e\in p} \min (c_ e-\tilde f(e), c_ e/\tilde c) \rfloor , then choose path p\in P randomly from the probability distribution q such that q_ p = \alpha \, f^*_ p / \delta _ p (choosing \alpha so that |q|=1). |
4. | Increase \tilde{f_ p} by \delta _ p. |
5. | Return \tilde f. |
For each sample, the scheme fixes an increment \delta _ p > 0 for each path p, then samples a random path p and increases the flow on the sampled path \tilde{f_ p} by \delta _ p (instead of 1). The sampling distribution is chosen so that the expected increase in each variable \tilde{f_ p} is \alpha \, f^*_ p for some \alpha .
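To make the sampling step concrete, here is one possible Python sketch of the rounding scheme. The instance encoding (paths as lists of capacitated edges, dictionaries for the capacities and for the fractional flow f^*) and every identifier are our own illustrative assumptions, not from the text; a real implementation would first compute f^*, e.g. by linear programming.

```python
import math
import random

def round_flow(paths, cap, fstar, eps, seed=0):
    """Hypothetical sketch of the rounding scheme with non-uniform increments.

    paths: path id -> list of capacitated edges on that path
    cap:   edge -> integer capacity c_e (assumed >= 3(1+eps)ln(m)/eps^2)
    fstar: path id -> fractional flow value f*_p
    """
    rng = random.Random(seed)
    m = len(cap)
    c_tilde = 3 * (1 + eps) * math.log(m) / eps ** 2
    f = {p: 0 for p in paths}        # integer flow  \tilde f
    load = {e: 0 for e in cap}       # edge loads    \tilde f(e)

    # increment delta_p: floor of the min, over edges e on p, of
    # min(residual capacity c_e - load(e), c_e / c~)
    def delta(p):
        return math.floor(min(min(cap[e] - load[e], cap[e] / c_tilde)
                              for e in paths[p]))

    while all(load[e] < cap[e] for e in cap):    # until some edge saturates
        d = {p: delta(p) for p in paths}
        # sample p with probability proportional to f*_p / delta_p, so the
        # expected increase of each f_p in this step is alpha * f*_p
        p = rng.choices(list(paths),
                        weights=[fstar[q] / d[q] for q in paths])[0]
        f[p] += d[p]
        for e in paths[p]:
            load[e] += d[p]
    return f
```

On an instance whose integer capacities meet the lemma's precondition, each \delta _ p is at least 1 while no edge is saturated, so the loop terminates with a feasible integer flow.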
Let m be the number of edges with capacity constraints. For any \varepsilon \in (0,1) such that \min _ e c_ e \ge \tilde c = 3(1+\varepsilon )\ln (m)/\varepsilon ^2, the flow \tilde f returned by the rounding scheme is feasible and has expected value at least v(f^*)/(1+\varepsilon ).
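For a sense of scale, the capacity threshold \tilde c can be computed directly; the parameter values below are assumed examples of ours, chosen only to show that small \varepsilon forces large minimum capacities.

```python
import math

def capacity_threshold(m, eps):
    """The lemma's precondition: every capacity must be at least
    c~ = 3(1+eps) ln(m) / eps^2."""
    return 3 * (1 + eps) * math.log(m) / eps ** 2

# illustrative values (not from the text)
print(round(capacity_threshold(1000, 0.5), 1))  # -> 124.3
print(round(capacity_threshold(1000, 0.1), 1))  # -> 2279.6
```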
Because the analysis must use a random stopping time, the standard Chernoff bound doesn’t directly apply, so we adapt it to handle the random stopping time. One could adapt it to give a tail bound on the probability of failure, but instead we adapt it to bound the expected flow size directly. This allows us to localize the bound later.
Define
\[ \psi (\tilde f) ~ =~ v(\tilde f) ~ +~ \frac{v(f^*)\, \ln (1+\varepsilon )}{\varepsilon \, \tilde c}\Big (\tilde c ~ -~ \operatorname {smax}_{e\in C}\, \tilde f(e)\, \tilde c/c_ e\Big ), \]
where \operatorname {smax}_{e\in C} z_ e ~ =~ \log _{1+\varepsilon }{\textstyle \sum _{e\in C}} (1+\varepsilon )^{z_ e}. This is a smooth approximation to \max _{e\in C} z_ e.
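To justify calling \operatorname {smax} a smooth approximation: every term of the sum is positive, so the sum lies between its largest term and m times that term, giving
\[ \max _{e\in C} z_ e ~ \le ~ \operatorname {smax}_{e\in C} z_ e ~ \le ~ \max _{e\in C} z_ e ~ +~ \log _{1+\varepsilon } m. \]
That is, \operatorname {smax} overestimates the true maximum by at most the additive term \log _{1+\varepsilon } m.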
To prove the lemma we show that \psi (\tilde f) is a valid pessimistic estimator for the conditional expectation of the final flow size, and has initial value at least v(f^*)/(1+\varepsilon ).
For the case of unit values (v_ p=1), in fact, \psi (\tilde f) is the (linearly scaled and translated)^{1} logarithm of the pessimistic estimator \phi from the “slow” multicommodity-flow algorithm. The calculations in this proof are somewhat forbidding, but they are essentially the calculations underlying that proof in a different guise.
Algorithm with non-uniform increments
input: instance {\cal I}=(\mbox{graph } G, \mbox{set of paths } P, \mbox{capacities } (C,c), \mbox{values } v) | |
output: integer flow \tilde f for {\cal I} | |
1. | Initialize \tilde{f_ p}=0 for every path p\in P. Let \tilde c = 3(1+\varepsilon )\ln (m)/\varepsilon ^2. |
2. | Repeat until some edge is saturated (\exists e\in C.~ \tilde f(e) = c_ e): |
3. | Choose path p\in P to minimize \sum _{e\in p} w_ e/(v_ pc_ e), where w_ e = (1+\varepsilon )^{\tilde f(e) \tilde c / c_ e}. |
4. | Increase \tilde{f_ p} by \delta _ p = \lfloor \min _{e\in p} \min (c_ e-\tilde f(e), c_ e/\tilde c) \rfloor . |
5. | Return \tilde f. |
Next we apply the method of conditional probabilities to derive the algorithm.
We simply reuse the pessimistic estimator \psi (\tilde f) (on the conditional expectation of the final flow size) from the existence proof. The algorithm chooses the path p in each iteration to keep the pessimistic estimator from decreasing, and so guarantees a successful outcome.
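The derandomized algorithm can be sketched the same way as the rounding scheme; again the instance encoding and all names are illustrative assumptions of ours, not the text's. Each iteration replaces the random sampling with the greedy choice of the path p minimizing \sum _{e\in p} w_ e/(v_ p c_ e), and no fractional flow f^* is needed.

```python
import math

def greedy_flow(paths, cap, values, eps):
    """Hypothetical sketch of the derandomized algorithm.

    paths:  path id -> list of capacitated edges on that path
    cap:    edge -> integer capacity c_e (assumed >= 3(1+eps)ln(m)/eps^2)
    values: path id -> value v_p
    """
    m = len(cap)
    c_tilde = 3 * (1 + eps) * math.log(m) / eps ** 2
    f = {p: 0 for p in paths}        # integer flow  \tilde f
    load = {e: 0 for e in cap}       # edge loads    \tilde f(e)

    while all(load[e] < cap[e] for e in cap):    # until some edge saturates
        # greedy step: minimize sum over e in p of w_e / (v_p c_e),
        # where w_e = (1+eps)^(load(e) * c~ / c_e)
        def cost(p):
            return sum((1 + eps) ** (load[e] * c_tilde / cap[e]) / cap[e]
                       for e in paths[p]) / values[p]
        p = min(paths, key=cost)
        # non-uniform increment delta_p, as in the rounding scheme
        d = math.floor(min(min(cap[e] - load[e], cap[e] / c_tilde)
                           for e in paths[p]))
        f[p] += d
        for e in paths[p]:
            load[e] += d
    return f
```

The greedy choice keeps the pessimistic estimator \psi from decreasing, so (unlike the randomized scheme) the value guarantee holds with certainty, not just in expectation.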
The performance guarantee follows:
Let m=|C| be the number of capacity constraints. For any \varepsilon \in (0,1) such that \min _{e\in C} c_ e \ge 3(1+\varepsilon )\ln (m)/\varepsilon ^2, given the graph G, the set of paths P, and \varepsilon , the algorithm above returns a feasible integer flow \tilde f whose value is at least v(f^*) / (1+\varepsilon ), where f^* is any maximum-value feasible flow.
Finally, we bound the number of iterations that the algorithm (and rounding scheme) take.
The number of iterations taken by the algorithm is at most 1+6 \min (m, \rho ) (1+\varepsilon )\ln (m)/\varepsilon ^2, where \rho = |f^*|/\min _{e\in C} c_ e is the “width” of the problem instance.
Recall \tilde c = 3(1+\varepsilon )\ln (m)/\varepsilon ^2, so the desired bound is 1+2\min (m, \rho )\, \tilde c.
Each iteration except the last increases the flow on some edge e\in p (the bottleneck edge, the one determining the minimum in the definition of \delta _ p) by \lfloor c_ e/\tilde c \rfloor , which is at least 0.5\, c_ e/\tilde c (as \lfloor x\rfloor \ge x/2 for x\ge 1). So each edge can be the bottleneck in fewer than 2\tilde c iterations before it saturates, and the number of iterations (before some edge is saturated) is at most 2m \tilde c.
Each iteration except the last increases the total flow |\tilde f| by at least \lfloor \min _{e\in C} c_ e/\tilde c \rfloor , which is at least 0.5 \min _{e\in C} c_ e/\tilde c, so the number of iterations is at most 2 |\tilde f| \tilde c / \min _{e\in C} c_ e \le 2\rho \tilde c.
Bibliography
Footnotes
- Let \phi (\tilde f) denote that pessimistic estimator with T=|f^*|/(1+\varepsilon ). Then
\[ \psi (\tilde f) = \frac{|f^*|}{1+\varepsilon } - \frac{|f^*| \ln (1+\varepsilon )}{\tilde c\, \varepsilon } \log _{1+\varepsilon } \phi (\tilde f). \]
For intuition, note that if \phi (\tilde f) is a super-martingale, then so is \log _{1+\varepsilon } \phi (\tilde f), which would make \psi (\tilde f) a sub-martingale. But note that we haven’t in fact shown that \phi (\tilde f) is a super-martingale with respect to the rounding scheme with non-uniform increments.