Journal of the ACM (JACM)

Latest Articles

Efficient Computation of Representative Families with Applications in Parameterized and Exact Algorithms

Let M=(E, I) be a matroid and let S={S1, …, St} be a family of...

Optimal Mechanisms for Combinatorial Auctions and Combinatorial Public Projects via Convex Rounding

We design the first truthful-in-expectation, constant-factor approximation mechanisms for NP-hard...

Are Lock-Free Concurrent Algorithms Practically Wait-Free?

Lock-free concurrent algorithms guarantee that some concurrent operation will always make progress in a finite number of steps. Yet programmers prefer...

Optimal Rate Code Constructions for Computationally Simple Channels

We consider coding schemes for computationally bounded channels, which can introduce an arbitrary set of errors as long as (a) the fraction of errors...

Query Complexity of Approximate Nash Equilibria

We study the query complexity of approximate notions of Nash equilibrium in games with a large number of players n. Our main result states that for n-player binary-action games and for constant ϵ, the query complexity of an ϵ-well-supported Nash equilibrium is exponential in n. As a consequence of this result, we get an exponential...

The Complexity of Finite-Valued CSPs

We study the computational complexity of exact minimization of rational-valued discrete functions. Let Γ be a set of rational-valued functions on a fixed finite domain; such a set is called a finite-valued constraint language. The valued constraint satisfaction problem, VCSP(Γ), is the problem of minimizing a function given as a sum of...


Important Note on P/NP: Some submissions purport to solve a long-standing open problem in complexity theory, such as the P/NP problem. Many of these turn out to be mistaken, and such submissions tax JACM volunteer editors and reviewers. JACM remains open to the possibility of eventual resolution of P/NP and related questions, and continues to welcome submissions on the subject. However, to mitigate the burden of repeated resubmissions due to incremental corrections of errors identified during editorial review, no author may submit more than one such paper to JACM, ACM Trans. on Algorithms, or ACM Trans. on Computation Theory in any 24-month period, except by invitation of the Editor-in-Chief. This applies to resubmissions of previously rejected manuscripts. Please consider this policy before submitting such a paper.

About JACM

The Journal of the ACM (JACM) provides coverage of the most significant work on principles of computer science, broadly construed. The scope of research we cover encompasses contributions of lasting value to any area of computer science. To be accepted, a paper must be judged to be truly outstanding in its field. JACM is interested in work in core computer science and at the boundaries, both the boundaries of subdisciplines of computer science and the boundaries between computer science and other fields.

Forthcoming Articles
Analysis of a Classical Matrix Preconditioning Algorithm

We study a classical iterative algorithm for balancing matrices in the $L_\infty$ norm via a scaling transformation. This algorithm, which goes back to Osborne and Parlett & Reinsch in the 1960s, is implemented as a standard preconditioner in many numerical linear algebra packages. Surprisingly, despite its widespread use over several decades, no bounds were known on its rate of convergence. In this paper we prove that, for any irreducible $n\times n$ (real or complex) input matrix $A$, a natural variant of the algorithm converges in $O(n^3 \log(n\rho/\epsilon))$ elementary balancing operations, where $\rho$ measures the initial imbalance of $A$ and $\epsilon$ is the target imbalance of the output matrix. (The imbalance of $A$ is $\max_i |\log(a_i^{\mathrm{out}}/a_i^{\mathrm{in}})|$, where $a_i^{\mathrm{out}}, a_i^{\mathrm{in}}$ are the maximum entries in magnitude in the $i$'th row and column respectively.) This bound is tight up to the $\log n$ factor. A balancing operation scales the $i$'th row and column so that their maximum entries are equal, and requires $O(m/n)$ arithmetic operations on average, where $m$ is the number of non-zero elements in $A$. Thus the running time of the iterative algorithm is $\widetilde{O}(n^2 m)$. This is the first time bound of any kind on any variant of the Osborne-Parlett-Reinsch algorithm. We also prove a conjecture of Chen that characterizes those matrices for which the limit of the balancing process is independent of the order in which balancing operations are performed.
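
Since the abstract fully specifies the elementary balancing operation, the iteration being analyzed is easy to sketch. The following is a minimal illustration only (not the authors' implementation or their particular variant); the sweep order, tolerance, and test matrix are made up for the example.

```python
import numpy as np

def row_col_max(A, i):
    """Largest off-diagonal magnitudes in row i and column i."""
    row = np.abs(np.delete(A[i, :], i)).max()
    col = np.abs(np.delete(A[:, i], i)).max()
    return row, col

def imbalance(A):
    """max_i |log(a_i^out / a_i^in)|, as defined in the abstract."""
    return max(abs(np.log(r / c))
               for r, c in (row_col_max(A, i) for i in range(A.shape[0])))

def balance(A, eps=1e-6):
    """Repeat elementary balancing operations until the imbalance drops
    below eps. One operation is the diagonal similarity D A D^{-1} that
    scales row i by alpha and column i by 1/alpha, making both maxima
    equal to sqrt(r*c)."""
    A = A.astype(float).copy()
    while imbalance(A) > eps:
        for i in range(A.shape[0]):
            r, c = row_col_max(A, i)
            alpha = np.sqrt(c / r)
            A[i, :] *= alpha
            A[:, i] /= alpha
    return A

# irreducible 3x3 toy matrix with a large initial imbalance
A = np.array([[0.0, 1e6, 1.0], [1.0, 0.0, 1e-3], [1e2, 1.0, 0.0]])
print(imbalance(A), imbalance(balance(A)))
```

Because each operation is a diagonal similarity, the spectrum of A is preserved throughout, which is what makes the procedure useful as a preconditioner for eigenvalue computations.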

A nearly optimal algorithm for deciding connectivity queries in smooth and bounded real algebraic sets

A roadmap for a semi-algebraic set $S$ is a curve which has a non-empty and connected intersection with all connected components of $S$. Hence, this kind of object, introduced by Canny, can be used to answer connectivity queries (with applications, for instance, to motion planning) but has also become of central importance in effective real algebraic geometry, since it is used in many higher-level algorithms. For a long time, the best known complexity result for computing roadmaps, due to Basu, Pollack and Roy, was $s^{d+1} D^{O(n^2)}$, where the input is given by $s$ polynomials of degree $D$ in $n$ variables, with $d \le n$ the dimension of an associated geometric object. In 2011, we introduced new proof techniques for establishing connectivity results in real algebraic sets. This gave us more freedom for the design of algorithms computing roadmaps and led us to a first probabilistic roadmap algorithm for smooth and bounded real hypersurfaces running in time $(nD)^{O(n^{1.5})}$. With Basu and Roy, we then obtained a deterministic algorithm for general real algebraic sets running in time $D^{O(n^{1.5})}$. Recently, Basu and Roy improved this result to obtain an algorithm computing a roadmap of degree polynomial in $n^{n\log^2(n)} D^{n\log(n)}$, in time polynomial in $n^{n\log^3(n)} D^{n\log^2(n)}$; this is close to the expected optimal $D^n$. In this paper, we provide a probabilistic algorithm which computes roadmaps for smooth and bounded real algebraic sets such that the output size and the running time are polynomial in $(nD)^{n\log(n)}$. More precisely, the running time of the algorithm is essentially subquadratic in the output size. Even though it requires these extra assumptions of smoothness and boundedness, this is the first roadmap algorithm with output size and running time polynomial in $(nD)^{n\log(n)}$.

How to Delegate Computations: The Power of No-Signaling Proofs

We construct a 1-round delegation scheme (i.e., argument-system) for every language computable in time t=t(n), where the running time of the prover is poly(t) and the running time of the verifier is n*polylog(t). In particular, for every language in P we obtain a delegation scheme with almost linear time verification. Our construction relies on the existence of a sub-exponentially secure computational private information retrieval (PIR) scheme. The proof exploits a curious connection between the problem of computation delegation and the model of multi-prover interactive proofs that are sound against no-signaling (cheating) strategies, a model that was studied in the context of multi-prover interactive proofs with provers that share quantum entanglement, and is motivated by the physical principle that information cannot travel faster than light. For any language computable in time t=t(n), we construct a multi-prover interactive proof (MIP) that is sound against no-signaling strategies, where the running time of the provers is poly(t), the number of provers is polylog(t), and the running time of the verifier is n*polylog(t). In particular, this shows that the class of languages that have polynomial-time MIPs that are sound against no-signaling strategies is exactly EXP. Previously, this class was only known to contain PSPACE. To convert our MIP into a 1-round delegation scheme, we use the method suggested by Aiello et al. (ICALP 2000), which makes use of a PIR scheme. This method lacked a proof of security. We prove that this method is secure assuming the underlying MIP is secure against no-signaling provers.

Exponential Separation of Information and Communication for Boolean Functions

We show an exponential gap between communication complexity and information complexity by giving an explicit example of a partial Boolean function with information complexity $\leq O(k)$ and distributional communication complexity $\geq 2^k$. This shows that a communication protocol cannot always be compressed to its internal information. By a result of Braverman, our gap is the largest possible. By a result of Braverman and Rao, our example shows a gap between communication complexity and amortized communication complexity, implying that a tight direct sum result for distributional communication complexity cannot hold, answering a long-standing open problem. Another (conceptual) contribution of our work is the relative discrepancy method, a new rectangle-based method for proving communication complexity lower bounds for Boolean functions, powerful enough to separate information complexity and communication complexity.

Robust Protocols for Securely Expanding Randomness and Distributing Keys Using Untrusted Quantum Devices

Randomness is a vital resource for modern-day information processing, especially for cryptography. A wide range of applications critically rely on abundant, high-quality random numbers generated securely. Here we show how to expand a random seed at an exponential rate without trusting the underlying quantum devices. Our approach is secure against the most general adversaries, and has the following new features: a cryptographic level of security, tolerance of a constant level of imprecision in the devices, a requirement of only unit-size quantum memory per device component for the honest implementation, and support for a large natural class of protocol constructions. In conjunction with a recent work by Chung, Shi, and Wu, it also leads to robust unbounded expansion using just 2 multi-part devices. When adapted for distributing cryptographic keys, our method achieves, for the first time, exponential expansion combined with cryptographic security and noise tolerance. The proof proceeds by showing that the Rényi divergence of the outputs of the protocol (for a specific bounding operator) decreases linearly as the protocol iterates. At the heart of the proof are a new uncertainty principle on quantum measurements, and a method for simulating trusted measurements with untrusted devices.

The Power of Localization for Efficiently Learning Linear Separators with Noise

We introduce a new approach for designing computationally efficient learning algorithms that are tolerant to noise, and demonstrate its effectiveness by designing algorithms with improved noise tolerance guarantees for learning linear separators. We consider the malicious noise model of Valiant and the adversarial label noise model of Kearns, Schapire, and Sellie. For malicious noise, where the adversary can corrupt both the label part and the feature part of examples, we provide a polynomial-time algorithm for learning linear separators under isotropic log-concave distributions with nearly information-theoretically optimal noise tolerance that is within a constant factor of the desired accuracy. For the adversarial label noise model, where the distribution over the feature vectors is unchanged, and the overall probability of a noisy label is constrained, we also give a polynomial-time algorithm for learning linear separators under isotropic log-concave distributions that can handle a noise rate within a constant factor of the desired accuracy. Our algorithms are also efficient in the active learning setting, where learning algorithms only receive the classifications of examples when they ask for them. We show that, in this model, our algorithms achieve a label complexity whose dependence on the error parameter is polylogarithmic (and thus exponentially better than that of any passive algorithm). This provides the first polynomial-time active learning algorithm for learning linear separators in the presence of malicious noise or adversarial label noise. Our algorithms and analysis combine several ingredients including aggressive localization, minimization of a progressively rescaled hinge loss, and a novel localized and soft outlier removal procedure. We use localization techniques (previously used for obtaining better sample complexity results) in order to obtain better noise-tolerant polynomial-time algorithms.
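
Two of the ingredients named above, localization to a band around the current hypothesis and minimization of a progressively rescaled hinge loss, can be shown schematically. The sketch below is only a simplified illustration with made-up parameter schedules (band width b, hinge scale tau); the paper's actual parameter choices, soft outlier removal step, and guarantees differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def hinge_subgradient(w, X, y, tau):
    """Subgradient of the tau-rescaled hinge loss max(0, 1 - y<w,x>/tau),
    averaged over the examples that violate the margin."""
    viol = (y * (X @ w) / tau) < 1.0
    if not viol.any():
        return np.zeros_like(w)
    return -(y[viol, None] * X[viol]).mean(axis=0) / tau

def localized_round(w, X, y, b, tau, steps=200, lr=0.1):
    """Keep only examples in the band |<w,x>| <= b around the current
    separator, then run subgradient descent on the rescaled hinge loss,
    renormalizing w to the unit sphere after each step."""
    band = np.abs(X @ w) <= b
    Xb, yb = X[band], y[band]
    for _ in range(steps):
        w = w - lr * hinge_subgradient(w, Xb, yb, tau)
        w /= np.linalg.norm(w)
    return w

# toy instance: isotropic Gaussian features, 5% flipped labels
n, d = 5000, 10
X = rng.standard_normal((n, d))
w_star = np.eye(d)[0]
y = np.sign(X @ w_star)
y[rng.random(n) < 0.05] *= -1

w = rng.standard_normal(d)
w /= np.linalg.norm(w)
for b, tau in [(1.0, 1.0), (0.5, 0.5), (0.25, 0.25)]:   # shrinking schedule
    w = localized_round(w, X, y, b, tau)
print("angle to target:", float(np.arccos(np.clip(w @ w_star, -1.0, 1.0))))
```

The intuition the sketch tries to convey is the one stated in the abstract: as the hypothesis improves, most of the remaining disagreement region lies in a thin band around the separator, so restricting attention (and label queries, in the active setting) to that band concentrates the learning effort where it matters.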

Approximate constraint satisfaction requires large LP relaxations

We prove super-polynomial lower bounds on the size of linear programming relaxations for approximation versions of constraint satisfaction problems. We show that, for these problems, polynomial-sized linear programs are exactly as powerful as programs arising from a constant number of rounds of the Sherali-Adams hierarchy. In particular, any polynomial-sized linear program for Max Cut has an integrality gap of 1/2, and any such linear program for Max 3-Sat has an integrality gap of 7/8.
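
To make the object being lower-bounded concrete: for Max Cut, the basic linear programming relaxation (a textbook formulation, not taken from the paper) has a variable $x_{uv}$ for each pair of vertices, intended as the indicator that $uv$ crosses the cut, constrained to behave like a cut metric:

\[
\max \sum_{uv \in E} x_{uv}
\quad \text{s.t.} \quad
x_{uv} \le x_{uw} + x_{wv}, \qquad
x_{uv} + x_{uw} + x_{wv} \le 2, \qquad
0 \le x_{uv} \le 1,
\]

for all triples $u, v, w$ (a cut separates at most two of the three pairs in a triangle, hence the second family of constraints). Sherali-Adams rounds systematically tighten such relaxations with auxiliary variables for larger vertex sets; the result above says that any polynomial-sized LP for Max Cut, however cleverly designed, still has worst-case integrality gap 1/2, no better than what basic relaxations of this kind achieve.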

Coloring 3-colorable graphs with less than n^{1/5} colors

We consider the problem of coloring a 3-colorable graph in polynomial time using as few colors as possible. We first present a new combinatorial algorithm using $\widetilde O(n^{4/11})$ colors. This is the first combinatorial improvement since Blum's $\widetilde O(n^{3/8})$ bound from FOCS'90. Like Blum's algorithm, our new algorithm composes immediately with recent semi-definite programming approaches, improving the best bound achieved by a polynomial-time algorithm for coloring 3-colorable graphs from $O(n^{0.2072})$ colors, due to Chlamtac from FOCS'07, to $O(n^{0.2049})$ colors. Next we develop a new recursion tailored for combination with semi-definite approaches, bringing us further down to $O(n^{0.19996})$ colors.
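
For context on where such guarantees start, the classic combinatorial baseline of Wigderson (which long predates Blum's bound, and is not the algorithm of this paper) already achieves $O(\sqrt{n})$ colors, exploiting the fact that in a 3-colorable graph the neighborhood of any vertex is 2-colorable. A minimal sketch:

```python
from collections import deque

def two_color(adj, verts):
    """2-color the induced subgraph on verts by BFS bipartition; this
    succeeds whenever that subgraph is bipartite, which holds for the
    neighborhood of any vertex in a 3-colorable graph."""
    verts = set(verts)
    color = {}
    for s in verts:
        if s in color:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v in verts and v not in color:
                    color[v] = 1 - color[u]
                    q.append(v)
    return color

def wigderson(adj):
    """Color a 3-colorable graph with O(sqrt(n)) colors."""
    n = len(adj)
    alive = set(adj)
    palette = {}
    next_color = 0
    threshold = max(1, int(n ** 0.5))
    # Phase 1: while some vertex has high degree, 2-color its
    # neighborhood with two fresh colors and remove it.
    while True:
        high = next((u for u in alive
                     if sum(v in alive for v in adj[u]) >= threshold), None)
        if high is None:
            break
        nbrs = [v for v in adj[high] if v in alive]
        for v, c in two_color(adj, nbrs).items():
            palette[v] = next_color + c
        next_color += 2
        alive -= set(nbrs)
    # Phase 2: the rest has degree < sqrt(n); greedy with fresh colors.
    for u in alive:
        used = {palette[v] for v in adj[u] if v in palette}
        c = next_color
        while c in used:
            c += 1
        palette[u] = c
    return palette

# tiny demo: C5 is 3-colorable
c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(wigderson(c5))
```

The modern bounds quoted in the abstract come from far more refined treatments of this same tension between high-degree and low-degree vertices, combined with semi-definite programming rounding.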

Upward Max Min Fairness

Often one would like to allocate shared resources in a fair way. A common and well-studied notion of fairness is {\em Max-Min Fairness}, where we first maximize the smallest allocation, and subject to that the second smallest, and so on. We consider a networking application where multiple commodities compete over the capacity of a network. In our setting each commodity has multiple possible paths to route its demand (for example, a network using MPLS tunneling). In this setting, the only known way of finding a max-min fair allocation requires an iterative solution of multiple linear programs. Such an approach, although polynomial time, scales badly with the size of the network, the number of demands, and the number of paths. More importantly, a network operator has limited control over and understanding of the inner workings of the algorithm. Finally, this approach is inherently centralized and cannot be implemented via a distributed protocol. In this paper we introduce Upward Max-Min Fairness, a novel relaxation of Max-Min Fairness, and present a family of simple dynamics that converge to it. These dynamics can be implemented in a distributed manner. Moreover, we present an efficient combinatorial algorithm for finding an upward max-min fair allocation. This algorithm is a natural extension of the well-known Water Filling Algorithm to a multiple-path setting. We test the expected behavior of this new algorithm and show that on realistic networks upward max-min fair allocations are comparable to the max-min fair allocations both in fairness and in network utilization.
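
The classical single-path water-filling algorithm that the paper generalizes is easy to state: raise all rates uniformly, freeze a demand as soon as some link on its path saturates, and repeat. A minimal sketch of that baseline (the paper's contribution, not shown here, is the multiple-path extension); the topology and capacities are made up:

```python
def water_filling(paths, capacity, tol=1e-12):
    """Max-min fair rates for single-path demands (classical water filling).
    paths: demand -> list of links it uses; capacity: link -> capacity."""
    rate = {d: 0.0 for d in paths}
    frozen = set()
    residual = dict(capacity)
    while len(frozen) < len(paths):
        # how many still-growing demands cross each link
        load = {l: 0 for l in capacity}
        for d, p in paths.items():
            if d not in frozen:
                for l in p:
                    load[l] += 1
        # largest uniform increment before some link fills up
        inc = min(residual[l] / load[l] for l in load if load[l] > 0)
        for d in paths:
            if d not in frozen:
                rate[d] += inc
        for l, k in load.items():
            residual[l] -= inc * k
        # demands touching a saturated link stop growing
        for d, p in paths.items():
            if d not in frozen and any(residual[l] <= tol for l in p):
                frozen.add(d)
    return rate

# three demands, two links; link "a" (capacity 1) is the bottleneck
demo = water_filling(
    paths={"d1": ["a"], "d2": ["a", "b"], "d3": ["b"]},
    capacity={"a": 1.0, "b": 1.5},
)
print(demo)  # d1 = d2 = 0.5 (bottlenecked by a), d3 = 1.0 (slack on b)
```

In the example, d1 and d2 are frozen at 0.5 when link a saturates, and d3 then grows to absorb the slack on link b; this "smallest allocations first" behavior is exactly the max-min ordering described above.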

Highway Dimension and Provably Efficient Shortest Path Algorithms

The Local Lemma is asymptotically tight for SAT

The Local Lemma is a fundamental tool of probabilistic combinatorics and theoretical computer science, yet there are hardly any natural problems known where it provides an asymptotically tight answer. The main theme of our paper is to identify several of these problems, among them a couple of widely studied extremal functions related to certain restricted versions of the k-SAT problem, where the Local Lemma does give essentially optimal answers. As our main contribution, we construct unsatisfiable k-CNF formulas where every clause has k distinct literals and every variable appears in at most (2/e + o(1))*2^k/k clauses. The Lopsided Local Lemma, applied with an assignment of random values according to counterintuitive probabilities, shows that this is asymptotically best possible. The determination of this extremal function is particularly important as it represents the value where the corresponding k-SAT problem exhibits a complexity hardness jump: below it every instance is satisfiable, while allowing each variable to occur in just one more clause makes the problem NP-hard. The construction of our unsatisfiable CNF formulas is based on the binary tree approach of [Gebauer, 2012], and thus the constructed formulas are in the class MU(1) of minimal unsatisfiable formulas having one more clause than variables. The main novelty of our approach here comes in setting up an appropriate continuous approximation of the problem. This leads us to a differential equation, the solution of which we are able to estimate. The asymptotically optimal binary trees are then obtained through a discretization of this solution. The importance of the binary trees constructed is also underlined by their appearance in many other scenarios. In particular, they give asymptotically precise answers for seemingly unrelated problems like the European Tenure Game introduced by Doerr, and a search problem allowing a limited number of consecutive lies. As yet another consequence we slightly improve the best known bounds on the maximum degree and maximum edge-degree of a k-uniform Maker's win hypergraph in the Neighborhood Conjecture of Beck.
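
To see where the constant 2/e comes from, one can run the symmetric Local Lemma with textbook parameters (a standard background calculation, not the paper's contribution). A clause of a k-CNF is violated by a uniformly random assignment with probability $p = 2^{-k}$; if every variable occurs in at most $\ell$ clauses, every clause shares variables with at most $d \le k(\ell - 1)$ others, and the Local Lemma applies whenever $e\,p\,(d+1) \le 1$:

\[
e \, 2^{-k} \bigl( k(\ell - 1) + 1 \bigr) \le 1
\quad \Longleftarrow \quad
\ell \le \frac{2^k}{ek},
\]

so any such formula is satisfiable. The Lopsided Local Lemma with suitably biased (rather than uniform) random assignments raises this threshold by a factor of 2, to the $(2/e + o(1))\,2^k/k$ quoted above, and the paper's construction of unsatisfiable formulas shows that this is asymptotically tight.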

Homotopy-initial algebras in type theory

We investigate inductive types in type theory, using the insights provided by homotopy type theory and univalent foundations of mathematics. We do so by introducing the new notion of a homotopy-initial algebra. This notion is defined by a purely type-theoretic contractibility condition which replaces the standard, category-theoretic universal property involving the existence and uniqueness of appropriate morphisms. Our main result characterises the types that are equivalent to W-types as homotopy-initial algebras.

(Meta) Kernelization

In a parameterized problem, every instance $I$ comes with a positive integer $k$. The problem is said to admit a {\em polynomial kernel} if, in polynomial time, one can reduce the size of the instance $I$ to a polynomial in $k$, while preserving the answer. In this work we give two meta-theorems on kernelization. The first theorem says that all problems expressible in Counting Monadic Second Order Logic and satisfying a compactness property admit a polynomial kernel on graphs of bounded genus. Our second result is that all problems that have finite integer index and satisfy a weaker compactness condition admit a linear kernel on graphs of bounded genus. These theorems unify and extend all previously known kernelization results for planar graph problems. Combining our theorems with the Erd\H{o}s-P\'{o}sa property, we obtain various new results on linear kernels for a number of packing and covering problems.
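
As a concrete illustration of what a polynomial kernel is (a textbook example, independent of the meta-theorems above): the classic Buss kernel for Vertex Cover reduces any instance $(G, k)$ in polynomial time to an equivalent instance with $O(k^2)$ edges.

```python
def buss_kernel(edges, k):
    """Kernelize Vertex Cover: return (reduced edge set, reduced k),
    or None if the instance is already known to be a NO-instance.
    Rule 1: a vertex of degree > k must be in any cover of size <= k.
    Rule 2: if max degree <= k but more than k^2 edges remain, no
    k vertices can cover them all."""
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:                      # Rule 1: put v in the cover
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
        if k < 0:
            return None
    if len(edges) > k * k:                 # Rule 2
        return None
    return edges, k

# a star on 10 leaves plus a triangle, k = 3
inst = [(0, i) for i in range(1, 11)] + [(20, 21), (21, 22), (20, 22)]
print(buss_kernel(inst, 3))   # star collapses via Rule 1; triangle remains
```

The meta-theorems of the paper deliver guarantees of this shape, including linear kernels, uniformly for whole logically defined classes of problems on bounded-genus graphs, rather than through problem-specific reduction rules like the two above.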

A Temporal-Logic Approach to Binding-Time Analysis

This paper demonstrates that there is a fundamental relationship between temporal logic and languages that involve multiple stages, such as those used to analyze binding times in the context of partial evaluation. This relationship is based on an extension of the Curry-Howard isomorphism, which identifies proofs with programs, and propositions with types. Our extension involves the ``next time'' (○) operator from linear-time temporal logic, and yields a λ-calculus that we call λ○, with types of the form ○A for expressions in the subsequent stage, including appropriate introduction and elimination forms. We demonstrate that λ○ is equivalent to the core of a previously studied multi-level binding-time analysis. This is similar to work by Davies and Pfenning on staged computation based on the necessity (□) operator of modal logic, but □ only allows closed code and naturally supports a code evaluation construct, while ○ captures open code; it is thus more flexible, but is incompatible with such a construct. Instead, code evaluation is an external global operation that is validated by the proof theory regarding closed proofs of ○ formulas. We demonstrate the relevance of λ○ to staged computation directly by showing that normalization can be done in an order strictly following the times of the logic. We also extend λ○ to a small functional language, and show that it would serve as a suitable basis for directly programming with multiple stages by presenting some example programs.
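
The staging discipline that λ○ types enforce can be imitated in any language with first-class code. The sketch below is an informal analogy only, with next-stage code represented as Python source strings rather than typed ○-terms; the classic example specializes a power function to a static exponent n while the base x stays dynamic.

```python
def power_gen(n: int) -> str:
    """Stage 0: n is static (known now). Build next-stage code, playing
    the role of a term of type ○(int -> int), specialized to this n."""
    body = "1"
    for _ in range(n):
        body = f"x * ({body})"
    return f"lambda x: {body}"

code = power_gen(3)      # specialization happens in the current stage
print(code)              # lambda x: x * (x * (x * (1)))
cube = eval(code)        # evaluation is a global, top-level operation
print(cube(5))           # next stage: 125
```

Note that eval is applied at top level to a closed piece of code, mirroring the paper's point that ○ supports open code inside terms but relegates code evaluation to an external global operation on closed ○-proofs.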

Faster polynomial multiplication over finite fields

Polynomials over finite fields play a central role in algorithms for cryptography, error correcting codes, and computer algebra. The complexity of multiplying such polynomials is still a major open problem. Let p be a prime, and let M_p(n) denote the bit complexity of multiplying two polynomials in F_p[X] of degree less than n. For n large compared to p, we establish the bound M_p(n) = O(n log n 8^(log* n) log p), where log* n = min {k : log log ... log n <= 1, with k nested logarithms} stands for the iterated logarithm. This improves on the previously best known bound M_p(n) = O(n log n log log n log p), which essentially goes back to the seventies.
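
Two small illustrations of the quantities in this bound (baseline material only; the paper's algorithm itself is far more involved): the iterated logarithm grows so slowly that 8^(log* n) is effectively a constant, and the schoolbook method that quasi-linear bounds of this kind improve on costs O(n^2) coefficient operations.

```python
import math

def log_star(n) -> int:
    """Iterated (base-2) logarithm: the number of times log2 must be
    applied to n before the value drops to at most 1."""
    k = 0
    while n > 1:
        n = math.log2(n)
        k += 1
    return k

def poly_mul_mod_p(f, g, p):
    """Schoolbook product of f, g in F_p[X], given as coefficient lists
    with lowest degree first: O(n^2) multiplications in F_p."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return h

print(log_star(2 ** 65536))                  # 5, even for this huge input
print(poly_mul_mod_p([1, 2], [3, 4, 5], 7))  # (1+2X)(3+4X+5X^2) mod 7 = [3, 3, 6, 3]
```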

Playing Mastermind with Many Colors

Polynomial Bounds for the Grid-Minor Theorem

One of the key results in Robertson and Seymour's seminal work on graph minors is the Grid-Minor Theorem (also called the Excluded Grid Theorem). The theorem states that for every fixed-size grid H, every graph whose treewidth is large enough contains H as a minor. This theorem has found many applications in graph theory and algorithms. Let f(k) denote the largest value such that every graph of treewidth k contains a grid minor of size $f(k)\times f(k)$. The best current quantitative bound, due to recent work of Kawarabayashi and Kobayashi, and Leaf and Seymour, shows that $f(k)=\Omega(\sqrt{\log k/\log \log k})$. In contrast, the best known upper bound implies that $f(k) =O(\sqrt{k/\log k})$. In this paper we obtain the first polynomial relationship between treewidth and grid-minor size by showing that $f(k)=\Omega(k^{\delta})$ for some fixed constant $\delta > 0$, and describe an algorithm, whose running time is polynomial in $|V(G)|$ and $k$, that finds such a grid minor.

Coin Flipping of Any Constant Bias Implies One-Way Functions

We show that the existence of a coin-flipping protocol safe against any non-trivial constant bias (e.g., 0.499) implies the existence of one-way functions. This improves upon a recent result of Haitner and Omri [FOCS '11], who proved this implication for protocols with bias $(\sqrt{2} - 1)/2 - o(1) \approx 0.207$. Unlike the result of Haitner and Omri, our result also holds for weak coin-flipping protocols.

Deterministic (Delta + 1)-coloring in sublinear (in Delta) time in static, dynamic and faulty networks

In the distributed message passing model a communication network is represented by an n-vertex graph G = (V,E) of maximum degree \Delta. Computation proceeds in discrete synchronous rounds consisting of sending and receiving messages and performing local computations. The running time of an algorithm is the number of rounds it requires. In the {static setting} the network remains unchanged throughout the entire execution. In the {dynamic setting} the topology of the network changes, and a new solution has to be computed after each change. In the {faulty setting} the network is static, but some vertices or edges may lose the computed solution as a result of faults. The goal of an algorithm in this setting is fixing the solution. The problems of (\Delta + 1)-vertex-coloring and (2\Delta - 1)-edge-coloring are among the most important and intensively studied problems in distributed computing. Despite very intensive research over the last 30 years, no deterministic algorithms for these problems with sublinear (in \Delta) time have been known so far. Moreover, for more restricted scenarios and some related problems there are lower bounds of \Omega(\Delta). The question of whether one can devise algorithms that overcome this challenging barrier is one of the most fundamental questions in distributed symmetry breaking. In this paper we settle this question for (\Delta + 1)-vertex-coloring and (2\Delta - 1)-edge-coloring by devising deterministic algorithms that require O(\Delta^{3/4} \log \Delta + \log^* n) time in the static, dynamic and faulty settings. (The term \log^* n is unavoidable in view of the lower bound of Linial.) Moreover, for (1 + o(1))\Delta-vertex-coloring and (2 + o(1))\Delta-edge-coloring we devise algorithms with \tilde{O}(\sqrt{\Delta} + \log^* n) deterministic time. This is roughly a quadratic improvement compared to the state-of-the-art that requires O(\Delta + \log^* n) time. Our results are actually more general than that, since they apply also to a variant of the list-coloring problem that generalizes ordinary coloring. Our results are obtained using a novel technique for coloring partially-colored graphs (also known as {fixing}). We partition the uncolored parts into a small number of subgraphs with certain helpful properties. Then we color these subgraphs gradually using a technique that employs {constructions of polynomials} in a novel way. Our construction is inspired by the algorithm of Linial for ordinary O(\Delta^2)-coloring. However, it is a more sophisticated construction that differs from that of Linial in several important respects. These new insights into using systems of polynomials allow us to significantly speed up the O(\Delta)-coloring algorithms. Moreover, they allow us to devise algorithms with the same running time also in the more complicated settings of dynamic and faulty networks.
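
For contrast with the distributed setting, the reason \Delta + 1 is the natural threshold is elementary: a single sequential greedy pass always succeeds with \Delta + 1 colors, because a vertex with at most \Delta neighbors always sees a free color. The entire difficulty addressed by the paper is achieving this in few rounds of purely local communication; the sketch below is the trivial sequential baseline, not the paper's algorithm.

```python
def greedy_coloring(adj):
    """Sequential (Delta+1)-coloring: each vertex u has at most Delta
    neighbors, so some color in {0, ..., deg(u)} is always available."""
    color = {}
    for u in adj:
        used = {color[v] for v in adj[u] if v in color}
        color[u] = next(c for c in range(len(adj[u]) + 1) if c not in used)
    return color

# 5-cycle: Delta = 2, so 3 colors always suffice
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(greedy_coloring(cycle))   # {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}
```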
