CMU-CS-07-172
Computer Science Department
School of Computer Science, Carnegie Mellon University



Using Online Algorithms to Solve NP-Hard
Problems More Efficiently in Practice

Matthew Streeter

December 2007

Ph.D. Thesis

CMU-CS-07-172.pdf


Keywords: Online algorithms, submodular functions, algorithm portfolios, restart schedules, task-switching schedules, multi-armed bandit problems

This thesis develops online algorithms that can be used to solve a wide variety of NP-hard problems more efficiently in practice. The common approach taken by all of our online algorithms is to improve the performance of one or more existing algorithms for a specific NP-hard problem by adapting those algorithms to the sequence of problem instances they are run on.

We begin by presenting an algorithm for solving a specific class of online resource allocation problems. Our online algorithm can be applied in environments where abstract jobs arrive one at a time, and one can complete the jobs by investing time in a number of abstract activities. Provided the jobs and activities satisfy certain technical conditions, our online algorithm is guaranteed to perform almost as well as any fixed schedule for investing time in the various activities, according to two natural measures of performance: (i) the average time required to complete each job, and (ii) the number of jobs completed within time T, for some fixed deadline T > 0.

In particular, our online algorithm's guarantees apply if each job can be expressed as a monotone, submodular function of a set of pairs of the form (v, τ), where τ is the time invested in activity v. Under the first objective, the offline version of this problem generalizes MIN-SUM SET COVER and the related PIPELINED SET COVER problem. Under the second objective, the offline version of this problem generalizes the problem of maximizing a monotone, submodular set function subject to a knapsack constraint. Our online algorithm has potential applications in a number of areas, including the design of algorithm portfolios, database query processing, and sensor placement.
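To make the formulation concrete, the following toy Python sketch (an illustration, not material from the thesis) gives one monotone, submodular job function over sets of (v, τ) pairs: each pair independently completes the job with probability p_v(τ), and f(S) is the probability that at least one pair in S succeeds. The activity names and success curves are hypothetical.

    import math

    # Hypothetical success curves p_v(tau): the probability that investing
    # time tau in activity v completes the job (increasing in tau).
    SUCCESS_CURVES = {
        "activity_A": lambda tau: 1.0 - math.exp(-0.5 * tau),
        "activity_B": lambda tau: 1.0 - math.exp(-0.2 * tau),
    }

    def job_coverage(pairs):
        """f(S) = 1 - product over (v, tau) in S of (1 - p_v(tau))."""
        miss = 1.0
        for v, tau in pairs:
            miss *= 1.0 - SUCCESS_CURVES[v](tau)
        return 1.0 - miss

    S = {("activity_A", 2.0)}
    T = {("activity_A", 2.0), ("activity_B", 5.0)}
    extra = ("activity_A", 1.0)

    # Monotone: adding a pair never decreases f.
    assert job_coverage(T) >= job_coverage(S)
    # Submodular: the marginal gain of `extra` is no larger on the superset T.
    gain_S = job_coverage(S | {extra}) - job_coverage(S)
    gain_T = job_coverage(T | {extra}) - job_coverage(T)
    assert gain_T <= gain_S + 1e-12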

We apply this online algorithm to the following problem. We are given k algorithms, and are fed, one at a time, a sequence of problem instances to solve. We may solve each instance using any of the k algorithms, we may interleave the execution of the algorithms, and, if the algorithms are randomized, we may periodically restart them with a fresh random seed. Our goal is to minimize the total CPU time required to solve all the instances. Using data from eight recent solver competitions, we show that our online algorithm and its offline counterpart can be used to improve the performance of state-of-the-art solvers in a number of problem domains, including Boolean satisfiability, zero-one integer programming, constraint satisfaction, and theorem proving.
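As a rough illustration of the offline counterpart (a simplified sketch, not the exact procedure from the thesis), the Python snippet below greedily builds a schedule of (solver, timeout) actions from a small runtime table, at each step choosing the action with the lowest time cost per newly solved instance. The solver names and runtimes are made-up stand-ins for competition data.

    # RUNTIMES[solver][i] = time needed by that solver to solve instance i
    # (hypothetical data).
    RUNTIMES = {
        "solver_A": [1.0, 9.0, 2.0, 50.0],
        "solver_B": [6.0, 3.0, 40.0, 4.0],
    }
    NUM_INSTANCES = 4

    def greedy_schedule(runtimes, num_instances):
        """Greedily append (solver, timeout) actions until every instance is solved."""
        unsolved = set(range(num_instances))
        schedule = []
        while unsolved:
            best = None  # (time per newly solved instance, solver, timeout, newly solved)
            for solver, times in runtimes.items():
                for timeout in sorted(set(times)):
                    newly = {i for i in unsolved if times[i] <= timeout}
                    if newly:
                        ratio = timeout / len(newly)
                        if best is None or ratio < best[0]:
                            best = (ratio, solver, timeout, newly)
            _, solver, timeout, newly = best
            schedule.append((solver, timeout))
            unsolved -= newly
        return schedule

    print(greedy_schedule(RUNTIMES, NUM_INSTANCES))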

We next present an online algorithm that can be used to improve the performance of algorithms that solve an optimization problem by making a sequence of calls to a decision procedure that answers questions of the form "Is there a solution of cost at most k?" We present an adaptive strategy for determining the sequence of questions to ask, along with bounds on the maximum time to spend waiting for an answer to each question. Under the assumption that the time required by the decision procedure to return an answer increases as k gets closer to the optimal solution cost, our strategy's performance is near-optimal when measured in terms of a natural competitive ratio. Experimentally, we show that applying our strategy to recent algorithms for A.I. planning and job shop scheduling allows the algorithms to find approximately optimal solutions more quickly.
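One simple strategy in this framework, sketched below under the stated assumption that queries become harder to answer near the optimal cost, is to binary-search the cost bound while geometrically growing the time limit whenever the decision procedure times out. This is an illustrative Python sketch, not the near-optimal strategy developed in the thesis; the decide(k, limit) interface and the toy difficulty model are hypothetical.

    def find_optimal_cost(decide, lo, hi, base_limit=1.0, growth=2.0):
        """Return the smallest k for which decide(k, limit) answers "yes".

        decide(k, limit) must return "yes", "no", or "timeout"; the optimal
        cost is assumed to lie in [lo, hi].
        """
        limit = base_limit
        while lo < hi:
            k = (lo + hi) // 2                # ask about the midpoint cost
            answer = decide(k, limit)
            if answer == "yes":
                hi = k                        # a solution of cost <= k exists
            elif answer == "no":
                lo = k + 1                    # no solution of cost <= k
            else:
                limit *= growth               # inconclusive: retry with more time
        return lo

    # Toy usage: pretend the optimum is 37 and the procedure needs more time
    # the closer k is to the optimum (the assumption discussed above).
    def toy_decide(k, limit, optimum=37):
        needed = 1.0 / (abs(k - optimum) + 1)
        if limit < needed:
            return "timeout"
        return "yes" if k >= optimum else "no"

    print(find_optimal_cost(toy_decide, 0, 100, base_limit=0.1))  # prints 37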

Lastly, we develop algorithms for solving the max k-armed bandit problem, a variant of the classical k-armed bandit problem in which one seeks to maximize the highest payoff received on any single trial, rather than the cumulative payoff. A strategy for solving the max k-armed bandit problem can be used to allocate trials among multi-start optimization heuristics. Motivated by results in extreme value theory, we present a no-regret strategy for the special case in which each arm returns payoffs drawn from a generalized extreme value distribution. We also present a heuristic strategy that solves the max k-armed bandit problem using a strategy for the classical k-armed bandit problem as a subroutine. Experimentally, we show that our max k-armed bandit strategy can be used to effectively allocate trials among multi-start heuristics for the RCPSP/max, a difficult real-world scheduling problem.
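In rough outline, such a heuristic can be sketched as follows (a simplified illustration, not the thesis's exact strategy, which uses a rising payoff threshold rather than the running maximum): a classical bandit algorithm, UCB1 here, is fed 0/1 surrogate rewards indicating whether a pull improved on the best payoff seen so far, while the quantity of interest is that best payoff itself. The arm distributions below are made up.

    import math
    import random

    def ucb1_select(counts, rewards, t):
        """Classical UCB1 arm selection on the 0/1 surrogate rewards."""
        for arm in range(len(counts)):
            if counts[arm] == 0:
                return arm                     # play each arm once first
        return max(range(len(counts)),
                   key=lambda a: rewards[a] / counts[a]
                   + math.sqrt(2.0 * math.log(t) / counts[a]))

    def max_bandit(pull, k, num_trials):
        """pull(arm) returns one payoff sample; return the best payoff found."""
        counts, rewards = [0] * k, [0.0] * k
        best = float("-inf")
        for t in range(1, num_trials + 1):
            arm = ucb1_select(counts, rewards, t)
            payoff = pull(arm)
            reward = 1.0 if payoff > best else 0.0  # surrogate reward for UCB1
            best = max(best, payoff)
            counts[arm] += 1
            rewards[arm] += reward
        return best

    # Toy usage: arms with heavier right tails are better for the max objective.
    arms = [lambda s=sigma: random.gauss(0.0, s) for sigma in (0.5, 1.0, 2.0)]
    print(max_bandit(lambda a: arms[a](), k=3, num_trials=1000))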

205 pages

