CMU-CS-11-128
Computer Science Department, School of Computer Science, Carnegie Mellon University
Kanat Tangwongsan, August 2011. Ph.D. Thesis.
Keywords: NA
Over the past few decades, advances in approximation algorithms have
enabled near-optimal solutions to many important, but computationally hard,
problems to be found efficiently. But despite the resurgence of parallel
computing in recent years, only a small number of these algorithms have been
considered from the standpoint of parallel computation. Furthermore,
among those for which parallel algorithms do exist, the algorithms, mostly
developed in the late 1980s and early 1990s, follow a design
principle that puts heavy emphasis on achieving polylogarithmic depth
with little regard for work, generally resulting in algorithms that perform
substantially more work than their sequential counterparts. As a result,
on a modest number of processors, these highly parallel but "heavy" algorithms are unlikely to perform competitively with the state-of-the-art sequential algorithms. This motivates the question of whether approximation algorithms can be made both highly parallel and work efficient. In this thesis, we explore a set of key algorithmic techniques that facilitate the design, analysis, and implementation of a wide variety of efficient parallel approximation algorithms. These include:
– Parallel SDD linear-system solvers. A near linear-work parallel solver for symmetric diagonally dominant (SDD) linear systems Ax = b with m nonzeros. The solver leads to fast parallel algorithms for max flow, min-cost flow, spectral sparsification, and more.
– Probabilistic tree embeddings. An RNC O(n² log n)-work algorithm for probabilistic tree embeddings with expected stretch O(log n), independent of the aspect ratio of the input metric. This is a parallel version of Fakcharoenphol et al.'s algorithm and provides a building block for algorithms for k-median and buy-at-bulk network design.
– Hierarchical diagonal blocking. A sparse-matrix representation that exploits the small-separator property found in many real-world matrices. We also develop a low-depth parallel algorithm for constructing the representation, which achieves substantial speedups over existing SpMV code.
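To make the notion of stretch in the second item concrete, here is a minimal sketch (a tiny illustrative example, not code or data from the thesis): a tree embedding maps a metric d into a dominating tree metric d_T, the stretch of a pair (u, v) is d_T(u, v)/d(u, v), and a probabilistic embedding guarantees expected stretch O(log n) for every pair over a random choice of tree.

```python
from itertools import combinations

def max_stretch(d, d_T, points):
    """Worst-case stretch of a single tree embedding: the maximum over
    pairs of d_T(u, v) / d(u, v). A probabilistic embedding instead
    bounds the *expected* stretch of each pair by O(log n)."""
    return max(d_T[(u, v)] / d[(u, v)] for u, v in combinations(points, 2))

# Illustrative example: 4 points on a unit cycle, embedded into the
# path tree 0-1-2-3 (the cycle edge {0, 3} is "cut" by the embedding).
pts = [0, 1, 2, 3]
cycle = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (0, 2): 2, (1, 3): 2, (0, 3): 1}
path  = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (0, 2): 2, (1, 3): 2, (0, 3): 3}
print(max_stretch(cycle, path, pts))  # the cut pair (0, 3) has stretch 3
```

Randomizing over which cycle edge is cut is exactly what drives the expected stretch down, since no single pair is stretched in every tree.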
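As context for the last item, the baseline computation is a compressed sparse row (CSR) sparse matrix-vector multiply; the sketch below (illustrative names, not code from the thesis) shows the plain kernel. Hierarchical diagonal blocking reorders the matrix by nested separators so that each block's accesses to x fall within a small index range, improving cache behavior relative to this baseline.

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """Plain CSR sparse matrix-vector multiply: y = A x.
    values/col_idx hold the nonzeros and their columns; row_ptr[i]
    marks where row i's nonzeros begin. The scattered reads of
    x[col_idx[k]] are what separator-based blocking localizes."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# Example: A = [[2, 0], [1, 3]], x = [1, 2]  ->  y = [2, 7]
print(spmv_csr([2.0, 1.0, 3.0], [0, 0, 1], [0, 1, 3], [1.0, 2.0]))
```

Each row's output is independent, so the outer loop also parallelizes directly, which is why low-depth construction of the blocked representation is the interesting part.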
200 pages
