Approximation Algorithms for Distributionally Robust Stochastic Optimization

Date

2019-05-15

Authors

Linhares Rodrigues, Andre

Advisor

Swamy, Chaitanya

Publisher

University of Waterloo

Abstract

Two-stage stochastic optimization is a widely used framework for modeling uncertainty, where we have a probability distribution over possible realizations of the data, called scenarios, and decisions are taken in two stages: we take first-stage actions knowing only the underlying distribution and before a scenario is realized, and may take additional second-stage recourse actions after a scenario is realized. The goal is typically to minimize the total expected cost. A common criticism levied at this model is that the underlying probability distribution is itself often imprecise. To address this, an approach that is quite versatile and has gained popularity in the stochastic-optimization literature is the two-stage distributionally robust stochastic model: given a collection D of probability distributions, our goal now is to minimize the maximum expected total cost with respect to a distribution in D.

However, there has been almost no prior work on developing approximation algorithms for distributionally robust problems where the underlying scenario collection is discrete, as is the case with discrete-optimization problems. We provide frameworks for designing approximation algorithms in such settings when the collection D is a ball around a central distribution, defined relative to two notions of distance between probability distributions: Wasserstein metrics (which include the L_1 metric) and the L_infinity metric. Our frameworks yield efficient algorithms even in settings with an exponential number of scenarios, where the central distribution may only be accessed via a sampling oracle.

For distributionally robust optimization under a Wasserstein ball, we first show that one can utilize the sample average approximation (SAA) method (solve the distributionally robust problem with an empirical estimate of the central distribution) to reduce the problem to the case where the central distribution has a polynomial-size support and is represented explicitly. This follows because we argue that a distributionally robust problem can be reduced in a novel way to a standard two-stage stochastic problem with bounded inflation factor, which enables one to use the SAA machinery developed for two-stage stochastic problems.

Complementing this, we show how to approximately solve a fractional relaxation of the SAA problem (i.e., the distributionally robust problem obtained by replacing the original central distribution with its empirical estimate). Unlike in two-stage stochastic and two-stage robust optimization with polynomially many scenarios, this turns out to be quite challenging. We utilize a variant of the ellipsoid method for convex optimization in conjunction with several new ideas to show that the SAA problem can be approximately solved, provided that we have an (approximation) algorithm for a certain max-min problem that is akin to, and generalizes, the k-max-min problem (find the worst-case scenario consisting of at most k elements) encountered in two-stage robust optimization. We obtain such an algorithm for various discrete-optimization problems; by complementing this via rounding algorithms that provide local (i.e., per-scenario) approximation guarantees, we obtain the first approximation algorithms for the distributionally robust versions of a variety of discrete-optimization problems, including set cover, vertex cover, edge cover, facility location, and Steiner tree, with guarantees that are, except for set cover, within O(1)-factors of the guarantees known for the deterministic versions of these problems.
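To fix notation for the Wasserstein-ball framework described above (the symbols below are illustrative and ours, not taken verbatim from the thesis), the two-stage distributionally robust problem can be written as

\[
  \min_{x \in \mathcal{X}} \; c(x) \;+\; \max_{D \in \mathcal{D}} \; \mathbb{E}_{A \sim D}\bigl[\, f_A(x) \,\bigr],
  \qquad
  \mathcal{D} \;=\; \bigl\{\, D \;:\; W(D, \hat{D}) \le r \,\bigr\},
\]

where x is a first-stage decision with cost c(x), f_A(x) is the cost of the cheapest second-stage recourse for scenario A given x, \hat{D} is the central distribution (possibly accessible only through a sampling oracle), W is the chosen Wasserstein (e.g., L_1) distance, and r is the radius of the ball. The SAA step replaces \hat{D} with an empirical estimate built from polynomially many samples and solves the resulting explicitly represented problem.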
For distributionally robust optimization under an L_infinity ball, we consider a fractional relaxation of the problem and replace its objective function with a proxy function that is pointwise close to the true objective function (within a factor of 2). We then show that we can efficiently compute approximate subgradients of the proxy function, provided that we have an algorithm that, given an integer t and a first-stage decision, computes the t worst scenarios for that decision. We can then approximately minimize the proxy function via a variant of the ellipsoid method, and thus obtain an approximate solution for the fractional relaxation of the distributionally robust problem. Complementing this via rounding algorithms with local guarantees, we obtain approximation algorithms for distributionally robust versions of various covering problems, including set cover, vertex cover, edge cover, and facility location, with guarantees that are within O(1)-factors of the guarantees known for their deterministic versions.
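In the same illustrative notation (again ours; the exact definitions and normalizations are as given in the thesis), the ambiguity set, the true fractional objective, and the proxy relationship described above take the form

\[
  \mathcal{D}_\infty \;=\; \Bigl\{\, D \;:\; \max_{A} \,\bigl|\, D(A) - \hat{D}(A) \,\bigr| \le r \,\Bigr\},
  \qquad
  h(x) \;=\; c(x) \;+\; \max_{D \in \mathcal{D}_\infty} \mathbb{E}_{A \sim D}\bigl[\, f_A(x) \,\bigr],
\]
\[
  g(x) \;\le\; h(x) \;\le\; 2\, g(x) \qquad \text{for every fractional first-stage decision } x,
\]

where distributions are viewed as vectors of scenario probabilities and g is the proxy function (the direction of the factor-2 bound shown here is one possible normalization). Approximate subgradients of g, computed via the t-worst-scenarios subroutine, drive the ellipsoid-based minimization.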

Keywords

approximation algorithms, stochastic optimization, discrete optimization, convex optimization
