Journal of the ACM (JACM)

Latest Articles

The Locality of Distributed Symmetry Breaking

Symmetry-breaking problems are among the most well studied in the field of distributed computing and yet the most fundamental questions about their complexity remain open. In this article we work in the LOCAL model (where the input graph and underlying distributed network are identical) and study the randomized complexity of four fundamental... (more)

Sample Compression Schemes for VC Classes

Sample compression schemes were defined by Littlestone and Warmuth (1986) as an abstraction of the structure underlying many learning algorithms. Roughly speaking, a sample compression scheme of size k means that given an arbitrary list of labeled examples, one can retain only k of them in a way that allows us to recover the labels of all other... (more)

Random Walks That Find Perfect Objects and the Lovász Local Lemma

We give an algorithmic local lemma by establishing a sufficient condition for the uniform random walk on a directed graph to reach a sink quickly. Our... (more)

On the Complexity of the Orbit Problem

We consider higher-dimensional versions of Kannan and Lipton’s Orbit Problem—determining whether a target vector space V may be reached from a starting point x under repeated applications of a linear transformation A. Answering two questions posed by Kannan and Lipton in the 1980s, we show that when V has dimension one, this... (more)

Formally Reasoning About Quality

In recent years, there has been a growing need and interest in formally reasoning about the quality of software and hardware systems. As opposed to traditional verification, in which one considers the question of whether a system satisfies a given specification or not, reasoning about quality addresses the question of how well the system satisfies... (more)

Semantics Out of Context

Call a semantics for a language with variables absolute when variables map to fixed entities in the denotation. That is, a semantics is absolute when the denotation of a variable a is a copy of a in the denotation. We give a trio of lattice-based, sets-based, and algebraic absolute semantics to first-order logic. Possibly open predicates are... (more)

NEWS

Important Note on P/NP: Some submissions purport to solve a long-standing open problem in complexity theory, such as the P/NP problem. Many of these turn out to be mistaken, and such submissions tax JACM volunteer editors and reviewers. JACM remains open to the possibility of eventual resolution of P/NP and related questions, and continues to welcome submissions on the subject. However, to mitigate the burden of repeated resubmissions due to incremental corrections of errors identified during editorial review, no author may submit more than one such paper to JACM, ACM Trans. on Algorithms, or ACM Trans. on Computation Theory in any 24-month period, except by invitation of the Editor-in-Chief. This applies to resubmissions of previously rejected manuscripts. Please consider this policy before submitting such a paper.

About JACM

The Journal of the ACM (JACM) provides coverage of the most significant work on principles of computer science, broadly construed. The scope of research we cover encompasses contributions of lasting value to any area of computer science. To be accepted, a paper must be judged to be truly outstanding in its field.  JACM is interested  in work in core computer science and at the boundaries, both the boundaries of subdisciplines of computer science and the boundaries between computer science and other fields.

Forthcoming Articles

The complexity of finite-valued CSPs

Analysis of a Classical Matrix Preconditioning Algorithm

We study a classical iterative algorithm for balancing matrices in the L∞ norm via a scaling transformation. This algorithm, which goes back to Osborne and Parlett & Reinsch in the 1960s, is implemented as a standard preconditioner in many numerical linear algebra packages. Surprisingly, despite its widespread use over several decades, no bounds were known on its rate of convergence. In this paper we prove that, for any irreducible n×n (real or complex) input matrix A, a natural variant of the algorithm converges in O(n^3 log(nρ/ε)) elementary balancing operations, where ρ measures the initial imbalance of A and ε is the target imbalance of the output matrix. (The imbalance of A is max_i |log(a_i^out/a_i^in)|, where a_i^out, a_i^in are the maximum entries in magnitude in the i'th row and column respectively.) This bound is tight up to the log n factor. A balancing operation scales the i'th row and column so that their maximum entries are equal, and requires O(m/n) arithmetic operations on average, where m is the number of non-zero elements in A. Thus the running time of the iterative algorithm is Õ(n^2 m). This is the first time bound of any kind on any variant of the Osborne-Parlett-Reinsch algorithm. We also prove a conjecture of Chen that characterizes those matrices for which the limit of the balancing process is independent of the order in which balancing operations are performed.
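
The balancing operation described above is simple enough to sketch directly. Below is a minimal Python sketch of the iteration in the L∞ norm, assuming an irreducible matrix of size at least 2 (so every row and column has a nonzero off-diagonal entry); it illustrates the operation only and is not the specific variant analyzed in the paper. The diagonal is excluded from the maxima, since the scaling leaves it unchanged.

import numpy as np

def linf_imbalance(A):
    """max_i |log(a_i^out / a_i^in)|, computed over off-diagonal entries
    (the diagonal cannot be changed by balancing)."""
    n = A.shape[0]
    vals = []
    for i in range(n):
        mask = np.arange(n) != i
        row_max = np.max(np.abs(A[i, mask]))
        col_max = np.max(np.abs(A[mask, i]))
        vals.append(abs(np.log(row_max / col_max)))
    return max(vals)

def balance_linf(A, eps=1e-8, max_sweeps=10_000):
    """Repeatedly apply balancing operations: scale row i by s and column i by 1/s
    with s chosen so the maximum off-diagonal magnitudes of row i and column i
    coincide.  Each operation is a diagonal similarity, so eigenvalues are preserved.
    A sketch only; assumes A is irreducible."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for _ in range(max_sweeps):
        if linf_imbalance(A) <= eps:
            break
        for i in range(n):
            mask = np.arange(n) != i
            row_max = np.max(np.abs(A[i, mask]))
            col_max = np.max(np.abs(A[mask, i]))
            s = np.sqrt(col_max / row_max)  # equalizes the two maxima
            A[i, :] *= s
            A[:, i] /= s
    return A

Each operation is a diagonal similarity D A D^{-1}, so the spectrum of A is preserved, which is what makes the procedure useful as an eigenvalue preconditioner.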

How to Delegate Computations: The Power of No-Signaling Proofs

We construct a 1-round delegation scheme (i.e., argument system) for every language computable in time t=t(n), where the running time of the prover is poly(t) and the running time of the verifier is n*polylog(t). In particular, for every language in P we obtain a delegation scheme with almost linear time verification. Our construction relies on the existence of a sub-exponentially secure computational private information retrieval (PIR) scheme. The proof exploits a curious connection between the problem of computation delegation and the model of multi-prover interactive proofs that are sound against no-signaling (cheating) strategies; this model was studied in the context of multi-prover interactive proofs with provers that share quantum entanglement, and is motivated by the physical principle that information cannot travel faster than light. For any language computable in time t=t(n), we construct a multi-prover interactive proof (MIP) that is sound against no-signaling strategies, where the running time of the provers is poly(t), the number of provers is polylog(t), and the running time of the verifier is n*polylog(t). In particular, this shows that the class of languages that have polynomial-time MIPs that are sound against no-signaling strategies is exactly EXP. Previously, this class was only known to contain PSPACE. To convert our MIP into a 1-round delegation scheme, we use the method suggested by Aiello et al. (ICALP, 2000), which makes use of a PIR scheme. This method previously lacked a proof of security. We prove that this method is secure assuming the underlying MIP is secure against no-signaling provers.

Exponential Separation of Information and Communication for Boolean Functions

We show an exponential gap between communication complexity and information complexity by giving an explicit example of a partial Boolean function with information complexity $\leq O(k)$ and distributional communication complexity $\geq 2^k$. This shows that a communication protocol cannot always be compressed to its internal information. By a result of Braverman, our gap is the largest possible. By a result of Braverman and Rao, our example shows a gap between communication complexity and amortized communication complexity, implying that a tight direct sum result for distributional communication complexity cannot hold, answering a long-standing open problem. Another (conceptual) contribution of our work is the relative discrepancy method, a new rectangle-based method for proving communication complexity lower bounds for Boolean functions, powerful enough to separate information complexity and communication complexity.

Robust Protocols for Securely Expanding Randomness and Distributing Keys Using Untrusted Quantum Devices

Randomness is a vital resource for modern-day information processing, especially for cryptography. A wide range of applications critically rely on abundant, high-quality random numbers generated securely. Here we show how to expand a random seed at an exponential rate without trusting the underlying quantum devices. Our approach is secure against the most general adversaries and has the following new features: a cryptographic level of security, tolerance of a constant level of imprecision in the devices, a requirement of only a unit-size quantum memory per device component for the honest implementation, and allowance of a large natural class of constructions for the protocol. In conjunction with a recent work by Chung, Shi and Wu, it also leads to robust unbounded expansion using just 2 multi-part devices. When adapted for distributing cryptographic keys, our method achieves, for the first time, exponential expansion combined with cryptographic security and noise tolerance. The proof proceeds by showing that the Rényi divergence of the outputs of the protocol (for a specific bounding operator) decreases linearly as the protocol iterates. At the heart of the proof are a new uncertainty principle on quantum measurements and a method for simulating trusted measurements with untrusted devices.

Constant rate PCPs for CircuitSat with sublinear query complexity

Approximate constraint satisfaction requires large LP relaxations

We prove super-polynomial lower bounds on the size of linear programming relaxations for approximation versions of constraint satisfaction problems. We show that for these problems, polynomial-sized linear programs are exactly as powerful as programs arising from a constant number of rounds of the Sherali-Adams hierarchy. In particular, any polynomial-sized linear program for Max Cut has an integrality gap of 1/2 and any such linear program for Max 3-Sat has an integrality gap of 7/8.

Optimal Rate Code Constructions for Computationally Simple Channels

We consider coding schemes for computationally bounded channels, which can introduce an arbitrary set of errors as long as (a) the fraction of errors is bounded with high probability by a parameter p and (b) the process which adds the errors can be described by a sufficiently simple circuit. Codes for such channel models are attractive since, like codes for standard adversarial errors, they can handle channels whose true behavior is unknown or varying over time. For two classes of channels, we provide explicit, efficiently encodable/decodable codes of optimal rate where only inefficiently decodable codes were previously known. In each case, we provide one encoder/decoder that works for every channel in the class. The encoders are randomized, and probabilities are taken over the (local, unknown to the decoder) coins of the encoder and those of the channel. Unique decoding for additive errors: we give the first construction of a polynomial-time encodable/decodable code for additive (a.k.a. oblivious) channels that achieves the Shannon capacity 1 − H(p). These are channels which add an arbitrary error vector e ∈ {0, 1}^N of weight at most pN to the transmitted word; the vector e can depend on the code but not on the particular transmitted word. Such channels capture binary symmetric errors and burst errors as special cases. List-decoding for polynomial-time channels: for every constant c > 0, we give a Monte Carlo construction of a code with optimal rate (arbitrarily close to 1 − H(p)) that efficiently recovers a short list containing the correct message with high probability for channels describable by circuits of size at most N^c. We are not aware of any channel models considered in the information theory literature, other than purely adversarial channels, which require more than linear-size circuits to implement. We justify the relaxation to list-decoding with an impossibility result showing that, in a large range of parameters (p > 1/4), codes that are uniquely decodable for a modest class of channels (online, memoryless, nonuniform channels) cannot have positive rate.
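
For reference, the benchmark rate 1 − H(p) used above involves the binary entropy function; the standard definition (not specific to this paper) is

\[
  H(p) \;=\; -\,p \log_2 p \;-\; (1-p)\log_2 (1-p),
\]

so, for example, 1 − H(p) is the Shannon capacity of the binary symmetric channel with crossover probability p, and it is the rate benchmark the abstract refers to.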

2-Server PIR with Sub-Polynomial Communication

A 2-server Private Information Retrieval (PIR) scheme allows a user to retrieve the $i$th bit of an $n$-bit database replicated among two non-communicating servers, while not revealing any information about $i$ to either server. In this work we construct a 2-server PIR scheme with total communication cost $n^{O\left(\sqrt{\frac{\log\log n}{\log n}}\right)}$. This improves over current 2-server protocols which all require $\Omega(n^{1/3})$ communication. Our construction circumvents the $n^{1/3}$ barrier of \cite{RazborovY06} which holds for the restricted model of bilinear group-based schemes (covering all previous 2-server schemes). The improvement comes from reducing the number of servers in existing protocols, based on Matching Vector Codes, from 3 or 4 servers to 2. This is achieved by viewing these protocols in an algebraic way (using polynomial interpolation) and extending them using partial derivatives.
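
As background on the model (and emphatically not the paper's sub-polynomial construction), the classic information-theoretic 2-server PIR scheme with O(n)-bit communication can be sketched as follows; all function names are illustrative.

import secrets

def pir_queries(n, i):
    """User side: pick a uniformly random subset of [n] for server 1 and send its
    symmetric difference with {i} to server 2.  Each query alone is a uniformly
    random subset, so neither server learns anything about i."""
    s1 = {j for j in range(n) if secrets.randbits(1)}
    s2 = s1 ^ {i}
    return s1, s2

def pir_answer(db, s):
    """Server side: return the XOR of the database bits indexed by the received set."""
    a = 0
    for j in s:
        a ^= db[j]
    return a

def pir_reconstruct(a1, a2):
    """The two index sets differ exactly at i, so XORing the answers recovers db[i]."""
    return a1 ^ a2

# usage
db = [1, 0, 1, 1, 0, 0, 1, 0]
i = 5
s1, s2 = pir_queries(len(db), i)
assert pir_reconstruct(pir_answer(db, s1), pir_answer(db, s2)) == db[i]

Each query here is an n-bit characteristic vector, so the total communication is linear in n; the construction in the paper brings this down to the sub-polynomial bound stated above.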

Highway Dimension and Provably Efficient Shortest Path Algorithms

The Local Lemma is asymptotically tight for SAT

The Local Lemma is a fundamental tool of probabilistic combinatorics and theoretical computer science, yet there are hardly any natural problems known where it provides an asymptotically tight answer. The main theme of our paper is to identify several of these problems, among them a couple of widely studied extremal functions related to certain restricted versions of the k-SAT problem, where the Local Lemma does give essentially optimal answers. As our main contribution, we construct unsatisfiable k-CNF formulas where every clause has k distinct literals and every variable appears in at most (2/e + o(1))*2^k/k clauses. The Lopsided Local Lemma, applied with an assignment of random values according to counterintuitive probabilities, shows that this is asymptotically best possible. The determination of this extremal function is particularly important as it represents the value where the corresponding k-SAT problem exhibits a complexity hardness jump: from every instance being a YES-instance, the problem becomes NP-hard merely by allowing each variable to occur in one more clause. The construction of our unsatisfiable CNF formulas is based on the binary tree approach of [Gebauer, 2012], and thus the constructed formulas are in the class MU(1) of minimal unsatisfiable formulas having one more clause than variables. The main novelty of our approach comes in setting up an appropriate continuous approximation of the problem. This leads us to a differential equation, the solution of which we are able to estimate. The asymptotically optimal binary trees are then obtained through a discretization of this solution. The importance of the binary trees constructed is also underlined by their appearance in many other scenarios. In particular, they give asymptotically precise answers for seemingly unrelated problems like the European Tenure Game introduced by Doerr, and a search problem allowing a limited number of consecutive lies. As yet another consequence we slightly improve the best known bounds on the maximum degree and maximum edge-degree of a k-uniform Maker's-win hypergraph in the Neighborhood Conjecture of Beck.
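
For orientation, the 2^k/k scale in the bound above is exactly what the symmetric Local Lemma predicts (background only; the constant 2/e requires the lopsided version used in the paper). Under a uniform random assignment a k-clause is violated with probability p = 2^{-k}, and if every variable occurs in at most d clauses, each clause shares a variable with at most D ≤ k(d − 1) other clauses, so the symmetric condition

\[
  e\,p\,(D+1) \le 1
  \quad\Longleftarrow\quad
  e \cdot 2^{-k} \cdot k d \le 1
  \quad\Longleftrightarrow\quad
  d \le \frac{2^k}{e\,k}
\]

guarantees satisfiability. Counting only clauses that clash on a literal, the lopsided version improves the admissible d by roughly a factor of 2, matching the (2/e + o(1)) * 2^k/k threshold that the construction in the paper shows to be asymptotically tight.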

Efficient Computation of Representative Families with Applications in Parameterized and Exact Algorithms

Let M = (E,I) be a matroid and let S = {S_1, ..., S_t} be a family of subsets of E of size p. A subfamily $\widehat{S} \subseteq S$ is q-representative for S if for every set $Y \subseteq E$ of size at most q, whenever there is a set $X \in S$ disjoint from Y with $X \cup Y \in I$, there is also a set $\widehat{X} \in \widehat{S}$ disjoint from Y with $\widehat{X} \cup Y \in I$. By the classical result of Bollobas, in a uniform matroid, every family of sets of size p has a q-representative family with at most (p+q choose p) sets. In his famous ``two families theorem'' from 1977, Lovasz proved that the same bound also holds for any matroid representable over a field F. As observed by Marx, Lovasz's proof is constructive. In this paper we show how Lovasz's proof can be turned into an algorithm constructing a q-representative family of size at most (p+q choose p) in time bounded by a polynomial in (p+q choose p), t, and the time required for field operations. We demonstrate how the efficient construction of representative families can be a powerful tool for designing single-exponential parameterized and exact exponential-time algorithms. The applications of our approach include the following.
- In the LONGEST DIRECTED CYCLE problem the input is a directed n-vertex graph G and a positive integer k. The task is to find a directed cycle of length at least k in G, if such a cycle exists. As a consequence of our 8^{k+o(k)} n^{O(1)} time algorithm, a directed cycle of length at least log(n), if such a cycle exists, can be found in polynomial time.
- In the MINIMUM EQUIVALENT GRAPH (MEG) problem we seek a spanning subdigraph D' of a given n-vertex digraph D with as few arcs as possible in which the reachability relation is the same as in the original digraph D. The existence of a single-exponential c^n-time algorithm for some constant c > 1 for MEG had been open since the work of Moyles and Thompson [JACM 1969].
- To demonstrate the diversity of applications of the approach, we provide an alternative proof of the recent results for algorithms on graphs of bounded treewidth, showing that many ``connectivity'' problems such as HAMILTONIAN CYCLE or STEINER TREE can be solved in time 2^{O(t)} * n on n-vertex graphs of treewidth at most t. We believe that expressing graph problems in ``matroid language'' sheds light on what makes it possible to solve connectivity problems in single-exponential time parameterized by treewidth.
For the special case of uniform matroids on n elements, we give a faster algorithm computing a representative family. We use this algorithm to provide the fastest known deterministic parameterized algorithms for k-PATH, k-TREE, and, more generally, for k-SUBGRAPH ISOMORPHISM, where the k-vertex pattern graph is of constant treewidth.
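
To make the definition above concrete, here is a brute-force Python check of q-representativity in the uniform matroid of rank p+q, where a union X ∪ Y is independent exactly when X and Y are disjoint (since |X| = p and |Y| ≤ q). This only illustrates the definition; the point of the paper is to construct small representative subfamilies efficiently, not to verify them by enumeration.

from itertools import combinations

def is_q_representative(ground, family, subfamily, q):
    """Definition check in the uniform matroid of rank p+q: for every Y with |Y| <= q,
    if some X in `family` is disjoint from Y, some set in `subfamily` must be as well.
    Exponential in |ground|; for illustration only."""
    ground = list(ground)
    for r in range(q + 1):
        for Y in combinations(ground, r):
            Y = set(Y)
            extendable = any(X.isdisjoint(Y) for X in family)
            covered = any(X.isdisjoint(Y) for X in subfamily)
            if extendable and not covered:
                return False
    return True

# p = 1, q = 1 over a 3-element ground set: two sets suffice, matching (p+q choose p) = 2
family = [{0}, {1}, {2}]
assert is_q_representative(range(3), family, [{0}], q=1) is False
assert is_q_representative(range(3), family, [{0}, {1}], q=1) is True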

(Meta) Kernelization

In a parameterized problem, every instance $I$ comes with a positive integer $k$. The problem is said to admit a {\em polynomial kernel} if, in polynomial time, one can reduce the size of the instance $I$ to a polynomial in $k$, while preserving the answer. In this work we give two meta-theorems on kernelization. The first theorem says that all problems expressible in Counting Monadic Second Order Logic and satisfying a compactness property admit a polynomial kernel on graphs of bounded genus. Our second result is that all problems that have finite integer index and satisfy a weaker compactness condition admit a linear kernel on graphs of bounded genus. These theorems unify and extend all previously known kernelization results for planar graph problems. Combining our theorems with the Erd\H{o}s-P\'{o}sa property, we obtain various new results on linear kernels for a number of packing and covering problems.
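
As a toy illustration of the kernel notion defined above (a classical example due to Buss, not the technique of this paper), VERTEX COVER admits a kernel with at most k^2 edges: any vertex of degree greater than k must be in every cover of size at most k, and once no such vertex remains, more than k^2 surviving edges certify a NO-instance. A minimal Python sketch:

def vertex_cover_kernel(edges, k):
    """Buss's kernelization for VERTEX COVER (illustration of a polynomial kernel only).
    Returns a reduced instance (edges', k') with at most k'^2 edges, or None if the
    instance is already known to be a NO-instance."""
    edges = {frozenset(e) for e in edges}
    while k >= 0:
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        high = next((v for v, d in degree.items() if d > k), None)
        if high is None:
            break
        # `high` must belong to every vertex cover of size <= k: remove it, decrement k
        edges = {e for e in edges if high not in e}
        k -= 1
    if k < 0 or len(edges) > k * k:
        return None  # k vertices of degree <= k cover at most k^2 edges
    return edges, k

The meta-theorems of the paper generalize this kind of ad hoc reduction rule to whole logically defined classes of problems on graphs of bounded genus.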

Foreword: volume 63 issue 4

Optimal Mechanisms for Combinatorial Auctions and Combinatorial Public Projects via Convex Rounding

Playing Mastermind with Many Colors

Are Lock-Free Concurrent Algorithms Practically Wait-Free?

Lock-free concurrent algorithms guarantee that \emph{some} concurrent operation will always make progress in a finite number of steps. Yet programmers prefer to treat concurrent code as if it were \emph{wait-free}, guaranteeing that \emph{all} operations always make progress. Unfortunately, designing wait-free algorithms is generally a very complex task, and the resulting algorithms are not always efficient. While obtaining efficient wait-free algorithms has been a long-time goal for the theory community, most non-blocking commercial code is only lock-free. This paper suggests a simple solution to this problem. We show that, for a large class of lock-free algorithms, under scheduling conditions which approximate those found in commercial hardware architectures, lock-free algorithms behave as if they are wait-free. In other words, programmers can keep on designing simple lock-free algorithms instead of complex wait-free ones, and in practice, they will get wait-free progress. Our main contribution is a new way of analyzing a general class of lock-free algorithms under a \emph{stochastic scheduler}. Our analysis relates the individual performance of processes with the global performance of the system using \emph{Markov chain lifting} between a complex per-process chain and a simpler system progress chain. We show that lock-free algorithms are not only wait-free with probability $1$, but that in fact a general subset of lock-free algorithms can be closely bounded in terms of the average number of steps required until an operation completes. To the best of our knowledge, this is the first attempt to analyze progress conditions, typically stated in relation to a worst case adversary, in a stochastic model capturing their expected asymptotic behavior.

Coin Flipping of Any Constant Bias Implies One-Way Functions

We show that the existence of a coin-flipping protocol safe against any non-trivial constant bias (e.g., .499) implies the existence of one-way functions. This improves upon a recent result of Haitner and Omri [FOCS '11], who proved this implication for protocols with bias (√2 − 1)/2 − o(1) ≈ .207. Unlike the result of Haitner and Omri, our result also holds for weak coin-flipping protocols.

Deterministic (Delta + 1)-coloring in sublinear (in Delta) time in static, dynamic and faulty networks

In the distributed message-passing model a communication network is represented by an n-vertex graph G = (V,E) of maximum degree \Delta. Computation proceeds in discrete synchronous rounds consisting of sending and receiving messages and performing local computations. The running time of an algorithm is the number of rounds it requires. In the static setting the network remains unchanged throughout the entire execution. In the dynamic setting the topology of the network changes, and a new solution has to be computed after each change. In the faulty setting the network is static, but some vertices or edges may lose the computed solution as a result of faults; the goal of an algorithm in this setting is to fix the solution. The problems of (\Delta + 1)-vertex-coloring and (2\Delta - 1)-edge-coloring are among the most important and intensively studied problems in distributed computing. Despite very intensive research in the last 30 years, no deterministic algorithms for these problems with sublinear (in \Delta) time have been known so far. Moreover, for more restricted scenarios and some related problems there are lower bounds of \Omega(\Delta). Whether one can devise algorithms that overcome this challenging barrier is one of the most fundamental questions in distributed symmetry breaking. In this paper we settle this question for (\Delta + 1)-vertex-coloring and (2\Delta - 1)-edge-coloring by devising deterministic algorithms that require O(\Delta^{3/4} \log \Delta + \log^* n) time in the static, dynamic and faulty settings. (The term \log^* n is unavoidable in view of the lower bound of Linial.) Moreover, for (1 + o(1))\Delta-vertex-coloring and (2 + o(1))\Delta-edge-coloring we devise algorithms with \tilde{O}(\sqrt{\Delta} + \log^* n) deterministic time. This is roughly a quadratic improvement compared to the state of the art, which requires O(\Delta + \log^* n) time. Our results are actually more general, since they apply also to a variant of the list-coloring problem that generalizes ordinary coloring. Our results are obtained using a novel technique for coloring partially-colored graphs (also known as fixing). We partition the uncolored parts into a small number of subgraphs with certain helpful properties. Then we color these subgraphs gradually using a technique that employs constructions of polynomials in a novel way. Our construction is inspired by the algorithm of Linial for ordinary O(\Delta^2)-coloring. However, it is a more sophisticated construction that differs from that of Linial in several important respects. These new insights into using systems of polynomials allow us to significantly speed up the O(\Delta)-coloring algorithms. Moreover, they allow us to devise algorithms with the same running time also in the more complicated settings of dynamic and faulty networks.

Query Complexity of Approximate Nash Equilibria

We study the query complexity of approximate notions of Nash equilibrium in games with a large number of players n. Our main result states that for n-player binary-action games and for constant ε, the query complexity of an ε-well-supported Nash equilibrium is exponential in n. As a consequence of this result, we get an exponential lower bound on the rate of convergence of adaptive dynamics to approximate Nash equilibria.
