Journal of the ACM (JACM)

Latest Articles

Efficient Computation of Representative Families with Applications in Parameterized and Exact Algorithms

Let M=(E, I) be a matroid and let S={S1, …, St} be a family of...

Optimal Mechanisms for Combinatorial Auctions and Combinatorial Public Projects via Convex Rounding

We design the first truthful-in-expectation, constant-factor approximation mechanisms for NP-hard...

Are Lock-Free Concurrent Algorithms Practically Wait-Free?

Lock-free concurrent algorithms guarantee that some concurrent operation will always make progress in a finite number of steps. Yet programmers prefer...

Optimal Rate Code Constructions for Computationally Simple Channels

We consider coding schemes for computationally bounded channels, which can introduce an arbitrary set of errors as long as (a) the fraction of errors...

The Complexity of Finite-Valued CSPs

We study the computational complexity of exact minimization of rational-valued discrete functions. Let Γ be a set of rational-valued functions on a fixed finite domain; such a set is called a finite-valued constraint language. The valued constraint satisfaction problem, VCSP(Γ), is the problem of minimizing a function given as a sum of...

NEWS

Important Note on P/NP: Some submissions purport to solve a long-standing open problem in complexity theory, such as the P/NP problem. Many of these turn out to be mistaken, and such submissions tax JACM volunteer editors and reviewers. JACM remains open to the possibility of eventual resolution of P/NP and related questions, and continues to welcome submissions on the subject. However, to mitigate the burden of repeated resubmissions due to incremental corrections of errors identified during editorial review, no author may submit more than one such paper to JACM, ACM Trans. on Algorithms, or ACM Trans. on Computation Theory in any 24-month period, except by invitation of the Editor-in-Chief. This applies to resubmissions of previously rejected manuscripts. Please consider this policy before submitting such a paper.

About JACM

The Journal of the ACM (JACM) provides coverage of the most significant work on principles of computer science, broadly construed. The scope of research we cover encompasses contributions of lasting value to any area of computer science. To be accepted, a paper must be judged to be truly outstanding in its field. JACM is interested in work in core computer science and at the boundaries, both the boundaries of subdisciplines of computer science and the boundaries between computer science and other fields.

Forthcoming Articles
Analysis of a Classical Matrix Preconditioning Algorithm

We study a classical iterative algorithm for balancing matrices in the $L_\infty$ norm via a scaling transformation. This algorithm, which goes back to Osborne and Parlett & Reinsch in the 1960s, is implemented as a standard preconditioner in many numerical linear algebra packages. Surprisingly, despite its widespread use over several decades, no bounds were known on its rate of convergence. In this paper we prove that, for any irreducible $n\times n$ (real or complex) input matrix $A$, a natural variant of the algorithm converges in $O(n^3 \log(n\rho/\varepsilon))$ elementary balancing operations, where $\rho$ measures the initial imbalance of $A$ and $\varepsilon$ is the target imbalance of the output matrix. (The imbalance of $A$ is $\max_i |\log(a_i^{\mathrm{out}}/a_i^{\mathrm{in}})|$, where $a_i^{\mathrm{out}}, a_i^{\mathrm{in}}$ are the maximum entries in magnitude in the $i$th row and column respectively.) This bound is tight up to the $\log n$ factor. A balancing operation scales the $i$th row and column so that their maximum entries are equal, and requires $O(m/n)$ arithmetic operations on average, where $m$ is the number of non-zero elements in $A$. Thus the running time of the iterative algorithm is $\widetilde{O}(n^2 m)$. This is the first time bound of any kind on any variant of the Osborne-Parlett-Reinsch algorithm. We also prove a conjecture of Chen that characterizes those matrices for which the limit of the balancing process is independent of the order in which balancing operations are performed.
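
The balancing operation itself is simple enough to state in code. The following is a minimal sketch of the iteration described above, assuming a real dense NumPy matrix and an irreducible input (so every off-diagonal row and column maximum is nonzero); all names are ours, and the round-robin sweep order is one choice among many, not necessarily the variant analyzed in the paper.

import numpy as np

def imbalance(A):
    # max_i |log(a_i^out / a_i^in)| over off-diagonal entries.
    B = np.abs(A).astype(float)
    np.fill_diagonal(B, 0.0)
    return np.max(np.abs(np.log(B.max(axis=1) / B.max(axis=0))))

def balance_index(A, i):
    # One elementary balancing operation: scale row i and column i so that
    # their maximum off-diagonal magnitudes become equal (both sqrt(a_in*a_out)).
    # This is a diagonal similarity, so the eigenvalues of A are unchanged.
    mask = np.arange(A.shape[0]) != i
    a_out = np.abs(A[i, mask]).max()
    a_in = np.abs(A[mask, i]).max()
    s = np.sqrt(a_in / a_out)
    A[i, :] *= s
    A[:, i] /= s

def osborne(A, eps=1e-3, max_sweeps=10_000):
    # Round-robin balancing sweeps until the imbalance drops below eps.
    A = np.array(A, dtype=float)  # work on a float copy
    for _ in range(max_sweeps):
        for i in range(A.shape[0]):
            balance_index(A, i)
        if imbalance(A) <= eps:
            break
    return A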

A nearly optimal algorithm for deciding connectivity queries in smooth and bounded real algebraic sets

A roadmap for a semi-algebraic set $S$ is a curve which has a non-empty and connected intersection with all connected components of $S$. Hence, this kind of object, introduced by Canny, can be used to answer connectivity queries (with applications, for instance, to motion planning) but has also become of central importance in effective real algebraic geometry, since it is used in many higher-level algorithms. For a long time, the best known complexity result for computing roadmaps, due to Basu, Pollack and Roy, was $s^{d+1} D^{O(n^2)}$, where the input is given by $s$ polynomials of degree $D$ in $n$ variables, with $d \le n$ the dimension of an associated geometric object. In 2011, we introduced new proof techniques for establishing connectivity results in real algebraic sets. This gave us more freedom for the design of algorithms computing roadmaps and led us to a first probabilistic roadmap algorithm for smooth and bounded real hypersurfaces running in time $(nD)^{O(n^{1.5})}$. With Basu and Roy, we then obtained a deterministic algorithm for general real algebraic sets running in time $D^{O(n^{1.5})}$. Recently, Basu and Roy improved this result to obtain an algorithm computing a roadmap of degree polynomial in $n^{n\log^2(n)} D^{n\log(n)}$, in time polynomial in $n^{n\log^3(n)} D^{n\log^2(n)}$; this is close to the expected optimal $D^n$. In this paper, we provide a probabilistic algorithm which computes roadmaps for smooth and bounded real algebraic sets such that the output size and the running time are polynomial in $(nD)^{n\log(n)}$. More precisely, the running time of the algorithm is essentially subquadratic in the output size. Even under these extra assumptions, it is the first roadmap algorithm with output size and running time polynomial in $(nD)^{n\log(n)}$.

How to Delegate Computations: The Power of No-Signaling Proofs

We construct a 1-round delegation scheme (i.e., argument system) for every language computable in time t=t(n), where the running time of the prover is poly(t) and the running time of the verifier is n*polylog(t). In particular, for every language in P we obtain a delegation scheme with almost linear time verification. Our construction relies on the existence of a sub-exponentially secure computational private information retrieval (PIR) scheme. The proof exploits a curious connection between the problem of computation delegation and the model of multi-prover interactive proofs that are sound against no-signaling (cheating) strategies, a model that was studied in the context of multi-prover interactive proofs with provers that share quantum entanglement, and is motivated by the physical principle that information cannot travel faster than light. For any language computable in time t=t(n), we construct a multi-prover interactive proof (MIP) that is sound against no-signaling strategies, where the running time of the provers is poly(t), the number of provers is polylog(t), and the running time of the verifier is n*polylog(t). In particular, this shows that the class of languages that have polynomial-time MIPs that are sound against no-signaling strategies is exactly EXP. Previously, this class was only known to contain PSPACE. To convert our MIP into a 1-round delegation scheme, we use the method suggested by Aiello et al. (ICALP 2000), which makes use of a PIR scheme. This method lacked a proof of security. We prove that this method is secure assuming the underlying MIP is secure against no-signaling provers.
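
As a rough illustration of the Aiello et al. template mentioned above (not the paper's actual construction; every helper object here is a hypothetical placeholder), the conversion can be pictured as the verifier hiding each MIP query inside its own PIR instance, so that the single prover learns nothing about the other queries beyond what a no-signaling strategy could exploit:

# Schematic Python only; mip_verifier, pir, and prover are hypothetical
# interfaces standing in for the no-signaling-sound MIP, a sub-exponentially
# secure PIR scheme, and the delegated worker, respectively.
def delegate(x, mip_verifier, pir, prover):
    queries, state = mip_verifier.sample_queries(x)  # q_1, ..., q_k
    pir_queries = [pir.query(q) for q in queries]    # hide each query separately
    pir_answers = prover.answer(x, pir_queries)      # one message each way
    answers = [pir.decode(a) for a in pir_answers]
    return mip_verifier.decide(state, answers)       # accept or reject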

Exponential Separation of Information and Communication for Boolean Functions

We show an exponential gap between communication complexity and information complexity, by giving an explicit example of a partial boolean function with information complexity $\leq O(k)$, and distributional communication complexity $\geq 2^k$. This shows that a communication protocol cannot always be compressed to its internal information. By a result of Braverman, our gap is the largest possible. By a result of Braverman and Rao, our example shows a gap between communication complexity and amortized communication complexity, implying that a tight direct sum result for distributional communication complexity cannot hold, answering a long-standing open problem. Another (conceptual) contribution of our work is the relative discrepancy method, a new rectangle-based method for proving communication complexity lower bounds for boolean functions, powerful enough to separate information complexity and communication complexity.

Robust Protocols for Securely Expanding Randomness and Distributing Keys Using Untrusted Quantum Devices

Randomness is a vital resource for modern-day information processing, especially for cryptography. A wide range of applications critically rely on abundant, high-quality random numbers generated securely. Here we show how to expand a random seed at an exponential rate without trusting the underlying quantum devices. Our approach is secure against the most general adversaries, and has the following new features: cryptographic level of security, tolerating a constant level of imprecision in the devices, requiring only a unit-size quantum memory per device component for the honest implementation, and allowing a large natural class of constructions for the protocol. In conjunction with a recent work by Chung, Shi and Wu, it also leads to robust unbounded expansion using just 2 multi-part devices. When adapted for distributing cryptographic keys, our method achieves, for the first time, exponential expansion combined with cryptographic security and noise tolerance. The proof proceeds by showing that the Rényi divergence of the outputs of the protocol (for a specific bounding operator) decreases linearly as the protocol iterates. At the heart of the proof are a new uncertainty principle on quantum measurements, and a method for simulating trusted measurements with untrusted devices.

Approximate constraint satisfaction requires large LP relaxations

We prove super-polynomial lower bounds on the size of linear programming relaxations for approximation versions of constraint satisfaction problems. We show that for these problems, polynomial-sized linear programs are exactly as powerful as programs arising from a constant number of rounds of the Sherali-Adams hierarchy. In particular, any polynomial-sized linear program for Max Cut has an integrality gap of 1/2 and any such linear program for Max 3-Sat has an integrality gap of 7/8.
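
For concreteness, the integrality-gap convention at work here (a standard definition, not specific to this paper) is $\mathrm{gap} = \inf_G \mathrm{OPT}(G)/\mathrm{LP}(G)$, where $\mathrm{OPT}(G)$ is the maximum fraction of edges cut and $\mathrm{LP}(G) \ge \mathrm{OPT}(G)$ is the optimum of the relaxation. Since a uniformly random assignment cuts half the edges in expectation, $\mathrm{OPT}(G) \ge 1/2$ for every graph while $\mathrm{LP}(G) \le 1$; a gap of $1/2$ therefore means such linear programs certify nothing beyond this trivial bound.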

Coloring 3-colorable graphs with less than n^{1/5} colors

We consider the problem of coloring a 3-colorable graph in polynomial time using as few colors as possible. We first present a new combinatorial algorithm using $\widetilde O(n^{4/11})$ colors. This is the first combinatorial improvement since Blum's $\widetilde O(n^{3/8})$ bound from FOCS'90. Like Blum's algorithm, our new algorithm composes immediately with recent semi-definite programming approaches, and improves the best bound for polynomial-time coloring of 3-colorable graphs from $O(n^{0.2072})$ colors, due to Chlamtac from FOCS'07, to $O(n^{0.2049})$ colors. Next we develop a new recursion tailored for combination with semi-definite approaches, bringing us further down to $O(n^{0.19996})$ colors.

Highway Dimension and Provably Efficient Shortest Path Algorithms

The Local Lemma is asymptotically tight for SAT

The Local Lemma is a fundamental tool of probabilistic combinatorics and theoretical computer science, yet there are hardly any natural problems known where it provides an asymptotically tight answer. The main theme of our paper is to identify several of these problems, among them a couple of widely studied extremal functions related to certain restricted versions of the k-SAT problem, where the Local Lemma does give essentially optimal answers. As our main contribution, we construct unsatisfiable k-CNF formulas where every clause has k distinct literals and every variable appears in at most (2/e + o(1))*2^k/k clauses. The Lopsided Local Lemma, applied with an assignment of random values according to counterintuitive probabilities, shows that this is asymptotically best possible. The determination of this extremal function is particularly important as it represents the value where the corresponding k-SAT problem exhibits a complexity hardness jump: from every instance being a YES-instance, the problem becomes NP-hard just by allowing each variable to occur in one more clause. The construction of our unsatisfiable CNF-formulas is based on the binary tree approach of [Gebauer, 2012] and thus the constructed formulas are in the class MU(1) of minimal unsatisfiable formulas having one more clause than variables. The main novelty of our approach here comes in setting up an appropriate continuous approximation of the problem. This leads us to a differential equation, the solution of which we are able to estimate. The asymptotically optimal binary trees are then obtained through a discretization of this solution. The importance of the binary trees constructed is also underlined by their appearance in many other scenarios. In particular, they give asymptotically precise answers for seemingly unrelated problems like the European Tenure Game introduced by Doerr, and a search problem allowing a limited number of consecutive lies. As yet another consequence we slightly improve the best known bounds on the maximum degree and maximum edge-degree of a k-uniform Maker's-win hypergraph in the Neighborhood Conjecture of Beck.
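
For comparison, the calculation behind the Local Lemma upper bound is short (this is the standard symmetric version; the lopsided refinement used above improves the constant). A fixed k-clause is violated by a uniformly random assignment with probability $p = 2^{-k}$, and if every variable occurs in at most $t$ clauses, each clause shares a variable with at most $d \le k(t-1) < kt$ others. The symmetric Local Lemma applies when $e\,p\,(d+1) \le 1$, which holds whenever $e \cdot 2^{-k} \cdot kt \le 1$, i.e. $t \le 2^k/(ek)$. Hence every k-CNF in which each variable occurs in at most $(1/e)\,2^k/k$ clauses is satisfiable; the lopsided version with the biased random assignment mentioned above doubles the constant to $2/e$, which the construction in the paper shows is asymptotically optimal.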

(Meta) Kernelization

In a parameterized problem, every instance $I$ comes with a positive integer $k$. The problem is said to admit a {\em polynomial kernel} if, in polynomial time, one can reduce the size of the instance $I$ to a polynomial in $k$, while preserving the answer. In this work we give two meta-theorems on kernelization. The first theorem says that all problems expressible in Counting Monadic Second Order Logic and satisfying a compactness property admit a polynomial kernel on graphs of bounded genus. Our second result is that all problems that have finite integer index and satisfy a weaker compactness condition admit a linear kernel on graphs of bounded genus. These theorems unify and extend all previously known kernelization results for planar graph problems. Combining our theorems with the Erd\H{o}s-P\'{o}sa property, we obtain various new results on linear kernels for a number of packing and covering problems.
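
The notion of a polynomial kernel is easy to make concrete on a classical example that predates these meta-theorems: the textbook Buss kernelization for Vertex Cover, sketched below in Python (an illustration of the definition only, not the paper's technique). Any instance $(G, k)$ is reduced in polynomial time to an equivalent one with at most $k^2$ edges.

def buss_kernel(edges, k):
    # Standard Buss kernel for Vertex Cover: given an edge list and the
    # parameter k, return an equivalent instance with <= k^2 edges, or
    # decide the instance outright.
    edges = {frozenset(e) for e in edges}
    while True:
        if k < 0:
            return set(), k, "NO"  # budget exhausted while edges remained
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        # Rule 1: a vertex of degree > k must be in every cover of size <= k,
        # so take it into the cover and decrement the budget.
        high = next((v for v, d in deg.items() if d > k), None)
        if high is None:
            break
        edges = {e for e in edges if high not in e}
        k -= 1
    # Rule 2: all degrees are now <= k, so a YES-instance has <= k^2 edges.
    if len(edges) > k * k:
        return set(), k, "NO"
    return edges, k, None  # kernel with at most k^2 edges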

Playing Mastermind with Many Colors

Polynomial Bounds for the Grid-Minor Theorem

One of the key results in Robertson and Seymour's seminal work on graph minors is the Grid-Minor Theorem (also called the Excluded Grid Theorem). The theorem states that for every fixed-size grid H, every graph whose treewidth is large enough contains H as a minor. This theorem has found many applications in graph theory and algorithms. Let f(k) denote the largest value such that every graph of treewidth k contains a grid minor of size $f(k)\times f(k)$. The best current quantitative bound, due to recent work of Kawarabayashi and Kobayashi, and Leaf and Seymour, shows that $f(k)=\Omega(\sqrt{\log k/\log \log k})$. In contrast, the best known upper bound implies that $f(k) =O(\sqrt{k/\log k})$. In this paper we obtain the first polynomial relationship between treewidth and grid-minor size by showing that $f(k)=\Omega(k^{\delta})$ for some fixed constant $\delta > 0$, and describe an algorithm, whose running time is polynomial in $|V(G)|$ and $k$, that finds such a grid-minor.

Coin Flipping of Any Constant Bias Implies One-Way Functions

We show that the existence of a coin-flipping protocol safe against any non-trivial constant bias (e.g., $0.499$) implies the existence of one-way functions. This improves upon a recent result of Haitner and Omri [FOCS '11], who proved this implication for protocols with bias $(\sqrt{2} - 1)/2 - o(1) \approx 0.207$. Unlike the result of Haitner and Omri, our result also holds for weak coin-flipping protocols.

Deterministic (Delta + 1)-coloring in sublinear (in Delta) time in static, dynamic and faulty networks

In the distributed message passing model a communication network is represented by an n-vertex graph G = (V,E) of maximum degree \Delta. Computation proceeds in discrete synchronous rounds consisting of sending and receiving messages and performing local computations. The running time of an algorithm is the number of rounds it requires. In the static setting the network remains unchanged throughout the entire execution. In the dynamic setting the topology of the network changes, and a new solution has to be computed after each change. In the faulty setting the network is static, but some vertices or edges may lose the computed solution as a result of faults. The goal of an algorithm in this setting is to fix the solution. The problems of (\Delta + 1)-vertex-coloring and (2\Delta - 1)-edge-coloring are among the most important and intensively studied problems in distributed computing. Despite very intensive research in the last 30 years, no deterministic algorithms for these problems with sublinear (in \Delta) time have been known so far. Moreover, for more restricted scenarios and some related problems there are lower bounds of \Omega(\Delta). The question of whether one can devise algorithms that overcome this challenging barrier is one of the most fundamental questions in distributed symmetry breaking. In this paper we settle this question for (\Delta + 1)-vertex-coloring and (2\Delta - 1)-edge-coloring by devising deterministic algorithms that require O(\Delta^{3/4} \log \Delta + \log^* n) time in the static, dynamic and faulty settings. (The term \log^* n is unavoidable in view of the lower bound of Linial.) Moreover, for (1 + o(1))\Delta-vertex-coloring and (2 + o(1))\Delta-edge-coloring we devise algorithms with \widetilde{O}(\sqrt{\Delta} + \log^* n) deterministic time. This is roughly a quadratic improvement compared to the state of the art, which requires O(\Delta + \log^* n) time. Our results are actually more general than that, since they apply also to a variant of the list-coloring problem that generalizes ordinary coloring. Our results are obtained using a novel technique for coloring partially-colored graphs (also known as fixing). We partition the uncolored parts into a small number of subgraphs with certain helpful properties. Then we color these subgraphs gradually using a technique that employs constructions of polynomials in a novel way. Our construction is inspired by the algorithm of Linial for ordinary O(\Delta^2)-coloring. However, it is a more sophisticated construction that differs from that of Linial in several important respects. These new insights into using systems of polynomials allow us to significantly speed up the O(\Delta)-coloring algorithms. Moreover, they allow us to devise algorithms with the same running time also in the more complicated settings of dynamic and faulty networks.
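
The classical step this builds on is concrete enough to sketch. Below is one round of Linial-style color reduction in Python (the standard construction the abstract cites as inspiration; the paper's technique differs in important respects, and all helper names are ours). A proper m-coloring is mapped to a proper coloring with q^2 colors, where q is a prime slightly larger than d\Delta and d is chosen so that q^{d+1} >= m: each color becomes a degree-<=d polynomial over F_q, and since two distinct such polynomials agree on at most d points, every vertex can pick an evaluation point avoiding all of its at most \Delta neighbors.

from itertools import count

def next_prime(n):
    # Smallest prime > n (trial division; fine for a sketch).
    is_prime = lambda x: x >= 2 and all(x % i for i in range(2, int(x**0.5) + 1))
    return next(p for p in count(n + 1) if is_prime(p))

def linial_round(adj, col, m, delta):
    # adj: vertex -> set of neighbors; col: proper coloring into {0..m-1}.
    # Returns a proper coloring whose palette is the q*q pairs (x, value).
    d = 1
    while True:
        q = next_prime(d * delta)   # q > d*delta: a free point always exists
        if q ** (d + 1) >= m:       # distinct colors -> distinct polynomials
            break
        d += 1
    poly = lambda c: [(c // q**i) % q for i in range(d + 1)]  # base-q digits
    ev = lambda cs, x: sum(a * pow(x, i, q) for i, a in enumerate(cs)) % q
    new_col = {}
    for v, nbrs in adj.items():
        pv = poly(col[v])
        # Each neighbor's polynomial agrees with pv on <= d points, so the
        # <= delta neighbors rule out <= d*delta < q candidate points x.
        x = next(x for x in range(q)
                 if all(ev(pv, x) != ev(poly(col[u]), x) for u in nbrs))
        new_col[v] = (x, ev(pv, x))
    return new_col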

Query Complexity of Approximate Nash Equilibria

We study the query complexity of approximate notions of Nash equilibrium in games with a large number of players n. Our main result states that for n-player binary-action games and for constant ε, the query complexity of an ε-well-supported Nash equilibrium is exponential in n. As a consequence of this result, we get an exponential lower bound on the rate of convergence of adaptive dynamics to approximate Nash equilibria.
