Journal of the ACM (JACM)

Latest Articles

A Nearly Optimal Algorithm for Deciding Connectivity Queries in Smooth and Bounded Real Algebraic Sets

A roadmap for a semi-algebraic set S is a curve which has a non-empty and connected intersection...

Belief Propagation Guided Decimation Fails on Random Formulas

Let Φ be a uniformly distributed random k-SAT formula with n variables and m clauses. Nonconstructive arguments show that Φ is satisfiable for...

The Power of Localization for Efficiently Learning Linear Separators with Noise

We introduce a new approach for designing computationally efficient learning algorithms that are tolerant to noise, and we demonstrate its...

Homotopy-Initial Algebras in Type Theory

We investigate inductive types in type theory, using the insights provided by homotopy type theory and univalent foundations of mathematics. We do so by introducing the new notion of a homotopy-initial algebra. This notion is defined by a purely type-theoretic contractibility condition that replaces the standard, category-theoretic universal...

Faster Polynomial Multiplication over Finite Fields

Polynomials over finite fields play a central role in algorithms for cryptography, error correcting codes, and computer algebra. The complexity of...

NEWS

Important Note on P/NP: Some submissions purport to solve a long-standing open problem in complexity theory, such as the P/NP problem. Many of these turn out to be mistaken, and such submissions tax JACM volunteer editors and reviewers. JACM remains open to the possibility of eventual resolution of P/NP and related questions, and continues to welcome submissions on the subject. However, to mitigate the burden of repeated resubmissions due to incremental corrections of errors identified during editorial review, no author may submit more than one such paper to JACM, ACM Trans. on Algorithms, or ACM Trans. on Computation Theory in any 24-month period, except by invitation of the Editor-in-Chief. This applies to resubmissions of previously rejected manuscripts. Please consider this policy before submitting such a paper.

About JACM

The Journal of the ACM (JACM) provides coverage of the most significant work on principles of computer science, broadly construed. The scope of research we cover encompasses contributions of lasting value to any area of computer science. To be accepted, a paper must be judged to be truly outstanding in its field. JACM is interested in work in core computer science and at the boundaries, both the boundaries of subdisciplines of computer science and the boundaries between computer science and other fields.

Forthcoming Articles
The freezing threshold for k-colourings of a random graph

We determine the exact value of the freezing threshold, r_k, for k-colourings of a random graph when k ≥ 14. We prove that for random graphs with density above r_k, almost every colouring is such that a linear number of vertices are frozen, meaning that their colours cannot be changed by a sequence of alterations whereby we change the colours of o(n) vertices at a time, always obtaining another proper colouring. When the density is below r_k, then almost every colouring is such that every vertex can be changed by a sequence of alterations where we change O(log n) vertices at a time. Frozen vertices are a key part of the clustering phenomena discovered using methods from statistical physics. The value of the freezing threshold was previously determined by the non-rigorous cavity method.
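
To make the frozen-vertex notion concrete, here is a small brute-force Python sketch (not the paper's method): it enumerates all proper k-colourings of a tiny graph and reports which vertices keep their colour in every colouring reachable from a given one by steps that recolour at most d vertices. The graph, k, and d below are made-up illustrative inputs.

from itertools import product
from collections import deque

def proper_colourings(n, edges, k):
    # All proper k-colourings of an n-vertex graph, as tuples indexed by vertex.
    return [c for c in product(range(k), repeat=n)
            if all(c[u] != c[v] for u, v in edges)]

def frozen_vertices(n, edges, k, start, d):
    # Vertices whose colour is identical in every colouring reachable from
    # `start` via steps that change the colours of at most d vertices.
    cols = proper_colourings(n, edges, k)
    seen, queue = {start}, deque([start])
    while queue:
        c = queue.popleft()
        for c2 in cols:
            if c2 not in seen and sum(a != b for a, b in zip(c, c2)) <= d:
                seen.add(c2)
                queue.append(c2)
    return [v for v in range(n) if all(c[v] == start[v] for c in seen)]

# Example: on a triangle with k = 3 and single-vertex steps (d = 1), every vertex
# is frozen, because any other proper colouring differs in at least two places.
print(frozen_vertices(3, [(0, 1), (1, 2), (0, 2)], 3, (0, 1, 2), 1))  # [0, 1, 2]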

Amplifiers for the Moran Process

The Moran process, as studied by Lieberman, Hauert and Nowak, is a randomised algorithm modelling the spread of genetic mutations in populations. The algorithm runs on an underlying graph where individuals correspond to vertices. Initially, one vertex (chosen uniformly at random) possesses a mutation, with fitness r>1. All other individuals have fitness 1. During each step of the algorithm, an individual is chosen with probability proportional to its fitness, and its state (mutant or non-mutant) is passed on to an out-neighbour which is chosen uniformly at random. If the underlying graph is strongly connected then the algorithm will eventually reach fixation, in which all individuals are mutants, or extinction, in which no individuals are mutants. An infinite family of directed graphs is said to be strongly amplifying if, for every r>1, the extinction probability tends to 0 as the number of vertices increases. A formal definition is provided in the paper. Strong amplification is a rather surprising property - it means that in such graphs, the fixation probability of a uniformly-placed initial mutant tends to 1 even though the initial mutant only has a fixed selective advantage of r>1 (independently of n). The name "strongly amplifying" comes from the fact that this selective advantage is "amplified". Strong amplifiers have received quite a bit of attention, and Lieberman et al. proposed two potentially strongly-amplifying families - superstars and metafunnels. Heuristic arguments have been published, arguing that there are infinite families of superstars that are strongly amplifying. The same has been claimed for metafunnels. In this paper, we give the first rigorous proof that there is an infinite family of directed graphs that is strongly amplifying. We call the graphs in the family "megastars". When the algorithm is run on an n-vertex graph in this family, starting with a uniformly-chosen mutant, the extinction probability is roughly $n^{-1/2}$ (up to logarithmic factors). We prove that all infinite families of superstars and metafunnels have larger extinction probabilities (as a function of n). Finally, we prove that our analysis of megastars is fairly tight - there is no infinite family of megastars such that the Moran algorithm gives a smaller extinction probability (up to logarithmic factors). Also, we provide a counter-example which clarifies the literature concerning the isothermal theorem of Lieberman et al.
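
For concreteness, the process just described can be simulated directly; the following Python sketch is a minimal illustration (the graph, fitness value, and trial count are arbitrary choices, not taken from the paper).

import random

def moran_fixation(out_neighbours, r, trials=2000, seed=0):
    # Estimate the fixation probability of a single uniformly placed mutant of
    # fitness r > 1 on a strongly connected directed graph, given as a list of
    # out-neighbour lists.
    rng = random.Random(seed)
    n = len(out_neighbours)
    fixations = 0
    for _ in range(trials):
        mutants = {rng.randrange(n)}        # initial mutant, uniform at random
        while 0 < len(mutants) < n:
            # Reproducer chosen with probability proportional to fitness.
            weights = [r if v in mutants else 1.0 for v in range(n)]
            v = rng.choices(range(n), weights=weights)[0]
            # Its state is copied onto a uniformly random out-neighbour.
            u = rng.choice(out_neighbours[v])
            if v in mutants:
                mutants.add(u)
            else:
                mutants.discard(u)
        fixations += (len(mutants) == n)
    return fixations / trials

# Example: a directed 10-cycle with r = 1.5; the extinction probability is
# one minus the returned estimate.
print(moran_fixation([[(i + 1) % 10] for i in range(10)], r=1.5))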

Monadic Decomposition

Monadic predicates play a prominent role in many decidable cases, including decision procedures for symbolic automata. We are interested here in discovering whether a formula can be rewritten into a Boolean combination of monadic predicates. Our setting is quantifier-free formulas whose satisfiability is decidable, such as linear arithmetic, and we develop a semi-decision procedure for extracting a monadic decomposition of a formula when one exists.
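
As a toy, finite-domain illustration of the target notion (not the paper's procedure, which works over infinite theories such as linear arithmetic), the Python sketch below checks whether a binary predicate is expressible as a Boolean combination of a given family of monadic predicates: this holds exactly when the predicate's value depends only on the vector of monadic truth values. The grid bounds, the predicate, and the candidate monadic predicates are made up for the example.

from itertools import product

def decomposable_over(phi, xs, ys, x_preds, y_preds):
    # phi(x, y) is a Boolean combination of the given monadic predicates iff it
    # is constant on every class of pairs that share the same truth-value vector.
    table = {}
    for x, y in product(xs, ys):
        sig = (tuple(p(x) for p in x_preds), tuple(q(y) for q in y_preds))
        if sig in table and table[sig] != phi(x, y):
            return False    # two pairs with the same signature disagree on phi
        table[sig] = phi(x, y)
    return True

xs = ys = range(-10, 11)
same_sign = lambda x, y: (x > 0) == (y > 0)
print(decomposable_over(same_sign, xs, ys, [lambda x: x > 0], [lambda y: y > 0]))  # True
print(decomposable_over(lambda x, y: x < y, xs, ys,
                        [lambda x: x > 0], [lambda y: y > 0]))                     # False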

Coloring 3-colorable graphs with less than n^{1/5} colors

We consider the problem of coloring 3-colorable graphs in polynomial time using as few colors as possible. We first present a new combinatorial algorithm using $\widetilde O(n^{4/11})$ colors. This is the first combinatorial improvement since Blum's $\widetilde O(n^{3/8})$ bound from FOCS'90. Like Blum's algorithm, our new algorithm composes immediately with recent semi-definite programming approaches, and improves the best bound for polynomial-time coloring of 3-colorable graphs from $O(n^{0.2072})$ colors, due to Chlamtac from FOCS'07, to $O(n^{0.2049})$ colors. Next we develop a new recursion tailored for combination with semi-definite approaches, bringing us further down to $O(n^{0.19996})$ colors.
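
Neither the paper's combinatorial algorithm nor its semi-definite combination is reproduced here; as a far simpler point of reference, the classical Wigderson-style argument already gives $O(\sqrt n)$ colors for 3-colorable graphs: while some vertex has degree at least $\sqrt n$, its neighborhood is 2-colorable and can be colored with two fresh colors and removed, and the low-degree remainder is colored greedily. A hedged Python sketch of that baseline (graph assumed undirected, given as adjacency sets):

import math
from collections import deque

def wigderson_colour(adj):
    # O(sqrt(n))-colouring of a 3-colourable graph (classical baseline, not the
    # paper's algorithm). adj: dict mapping each vertex to a set of neighbours.
    adj = {v: set(ns) for v, ns in adj.items()}    # work on a copy
    colour, next_colour = {}, 0
    threshold = math.isqrt(len(adj)) + 1

    # Phase 1: while some vertex has high degree, 2-colour its neighbourhood
    # (bipartite, since the whole graph is 3-colourable) and remove it.
    while True:
        high = next((v for v in adj if len(adj[v]) >= threshold), None)
        if high is None:
            break
        nbrs = set(adj[high])
        side = {}
        for s in nbrs:                     # BFS bipartition of the neighbourhood
            if s in side:
                continue
            side[s], queue = 0, deque([s])
            while queue:
                u = queue.popleft()
                for w in adj[u] & nbrs:
                    if w not in side:
                        side[w] = 1 - side[u]
                        queue.append(w)
        for u in nbrs:
            colour[u] = next_colour + side[u]
        next_colour += 2
        for u in nbrs:                     # delete the coloured neighbourhood
            for w in adj[u]:
                adj[w].discard(u)
            del adj[u]

    # Phase 2: the rest has maximum degree below the threshold; colour it
    # greedily with a fresh palette, so it never conflicts with phase-1 colours.
    base = next_colour
    for v in adj:
        used = {colour[u] for u in adj[v] if u in colour}
        c = base
        while c in used:
            c += 1
        colour[v] = c
    return colour

# Example: a 5-cycle (3-colourable but not bipartite).
print(wigderson_colour({i: {(i + 1) % 5, (i - 1) % 5} for i in range(5)}))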

Upward Max Min Fairness

Often one would like to allocate shared resources in a fair way. A common and well-studied notion of fairness is Max-Min Fairness, where we first maximize the smallest allocation, and subject to that the second smallest, and so on. We consider a networking application where multiple commodities compete over the capacity of a network. In our setting each commodity has multiple possible paths to route its demand (for example, a network using MPLS tunneling). In this setting, the only known way of finding a max-min fair allocation requires an iterative solution of multiple linear programs. Such an approach, although polynomial time, scales badly with the size of the network, the number of demands, and the number of paths. More importantly, a network operator has limited control over and understanding of the inner workings of the algorithm. Finally, this approach is inherently centralized and cannot be implemented via a distributed protocol. In this paper we introduce Upward Max-Min Fairness, a novel relaxation of Max-Min Fairness, and present a family of simple dynamics that converge to it. These dynamics can be implemented in a distributed manner. Moreover, we present an efficient combinatorial algorithm for finding an upward max-min fair allocation. This algorithm is a natural extension of the well-known Water Filling Algorithm to the multiple-path setting. We test the expected behavior of this new algorithm and show that on realistic networks upward max-min fair allocations are comparable to the max-min fair allocations both in fairness and in network utilization.
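
The classical single-path Water Filling Algorithm that the paper extends is easy to state: raise all unfrozen flows at a common rate, freeze every flow that crosses a link which becomes saturated, and repeat. A hedged Python sketch of that single-path version (the link capacities and demand-to-path assignments below are illustrative, and this is not the paper's multi-path algorithm):

def water_filling(capacities, paths):
    # Max-min fair rates for single-path elastic demands.
    # capacities: dict link -> capacity; paths: dict demand -> list of links.
    remaining = dict(capacities)
    rate = {d: 0.0 for d in paths}
    active = set(paths)
    while active:
        # Each link shares its residual capacity equally among the active
        # demands crossing it; the tightest link fixes the common increment.
        load = {l: sum(1 for d in active if l in paths[d]) for l in remaining}
        increment = min(remaining[l] / load[l] for l in remaining if load[l] > 0)
        for d in active:
            rate[d] += increment
        saturated = set()
        for l in remaining:
            if load[l] > 0:
                remaining[l] -= increment * load[l]
                if remaining[l] <= 1e-12:
                    saturated.add(l)
        # Freeze demands crossing a saturated link; drop those links.
        active = {d for d in active if not any(l in saturated for l in paths[d])}
        for l in saturated:
            del remaining[l]
    return rate

# Example: demand "a" uses links 1 and 2, "b" only link 1, "c" only link 2.
print(water_filling({1: 1.0, 2: 2.0}, {"a": [1, 2], "b": [1], "c": [2]}))
# -> {'a': 0.5, 'b': 0.5, 'c': 1.5}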

A Temporal-Logic Approach to Binding-Time Analysis

This paper demonstrates that there is a fundamental relationship between temporal logic and languages that involve multiple stages, such as those used to analyze binding times in the context of partial evaluation. This relationship is based on an extension of the Curry-Howard isomorphism, which identifies proofs with programs, and propositions with types. Our extension involves the "next time" (○) operator from linear-time temporal logic, and yields a λ-calculus that we call λ○, with types of the form ○A for expressions in the subsequent stage, including appropriate introduction and elimination forms. We demonstrate that λ○ is equivalent to the core of a previously studied multi-level binding-time analysis. This is similar to work by Davies and Pfenning on staged computation based on the necessity (□) operator of modal logic, but □ only allows closed code and naturally supports a code evaluation construct, while ○ captures open code and is thus more flexible, but is incompatible with such a construct. Instead, code evaluation is an external global operation that is validated by the proof theory regarding closed proofs of ○ formulas. We demonstrate the relevance of λ○ to staged computation directly by showing that normalization can be done in an order strictly following the times of the logic. We also extend λ○ to a small functional language, and show that it would serve as a suitable basis for directly programming with multiple stages by presenting some example programs.
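
For orientation, in this style of system the typing judgement is indexed by a time (stage) n, and the introduction and elimination forms for ○ move an expression one stage forward and back. A standard presentation of the two rules (notation is ours, assumed rather than quoted from the paper) is

\[ \frac{\Gamma \vdash^{n+1} e : A}{\Gamma \vdash^{n} \mathsf{next}\ e : \bigcirc A} \qquad \frac{\Gamma \vdash^{n} e : \bigcirc A}{\Gamma \vdash^{n+1} \mathsf{prev}\ e : A} \]

so next e builds code to be run at the following stage, and prev e splices such code back in at that stage.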

Constant-rate coding for multiparty interactive communication is impossible

We study coding schemes for multiparty interactive communication over synchronous networks that suffer from stochastic noise, where each bit is independently flipped with probability ε. We analyze the minimal overhead that must be added by the coding scheme in order to succeed in performing the computation despite the noise. Our main result is a lower bound on the communication of any noise-resilient protocol over a synchronous star network with n parties (where all parties communicate in every round). Specifically, we show a task that can be solved by communicating T bits over the noise-free network, but for which any protocol with success probability of 1 − o(1) must communicate at least Ω(T log n / log log n) bits when the channels are noisy. By a 1994 result of Rajagopalan and Schulman, the slowdown we prove is the highest one can obtain on any topology, up to a log log n factor. We complete our lower bound with a matching coding scheme that achieves the same overhead; thus, the capacity of (synchronous) star networks is Θ(log log n / log n). Our bounds prove that, despite several previous coding schemes with rate Ω(1) for certain topologies, no coding scheme with constant rate Ω(1) exists for arbitrary n-party noisy networks.
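
The capacity statement is the communication bound read as a rate: a task that needs only T bits without noise but $\Omega(T \log n / \log\log n)$ bits over the noisy star has rate at most $T / \Omega(T \log n / \log\log n) = O(\log\log n / \log n)$, and the matching coding scheme shows this is tight, giving capacity $\Theta(\log\log n / \log n)$.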

Tight Lower Bounds on Graph Embedding Problems

We prove that unless the Exponential Time Hypothesis (ETH) fails, deciding if there is a homomorphism from graph G to graph H cannot be done in time |V(H)|^{o(|V(G)|)}. We also show an exponential-time reduction from Graph Homomorphism to Subgraph Isomorphism. This rules out (subject to the ETH) the possibility of a |V(H)|^{o(|V(H)|)}-time algorithm deciding if graph G is a subgraph of H. For both problems our lower bounds asymptotically match the running time of brute-force algorithms trying all possible mappings of one graph into another. Thus, our work closes the gap in the known complexity of these fundamental problems. Moreover, as a consequence of our reductions, conditional lower bounds follow for other related problems such as Locally Injective Homomorphism, Graph Minors, Topological Graph Minors, Minimum Distortion Embedding, and the Quadratic Assignment Problem.
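
The brute-force running time that these bounds match is simply the cost of trying all |V(H)|^{|V(G)|} mappings; a minimal Python sketch of that baseline (undirected graphs given as edge lists over vertices 0..n-1, purely illustrative):

from itertools import product

def has_homomorphism(g_vertices, g_edges, h_vertices, h_edges):
    # Brute force over all |V(H)|^|V(G)| mappings f: V(G) -> V(H), checking
    # that every edge of G is mapped onto an edge of H.
    h_edge_set = {(u, v) for u, v in h_edges} | {(v, u) for u, v in h_edges}
    for f in product(range(h_vertices), repeat=g_vertices):
        if all((f[u], f[v]) in h_edge_set for u, v in g_edges):
            return True
    return False

# Example: a triangle maps homomorphically into itself, but not into a single
# edge (that would be a proper 2-colouring of an odd cycle).
triangle = [(0, 1), (1, 2), (2, 0)]
print(has_homomorphism(3, triangle, 3, triangle))   # True
print(has_homomorphism(3, triangle, 2, [(0, 1)]))   # False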

Property-Directed Inference of Universal Invariants or Proving Their Absence

We present Universal Property Directed Reachability (PDR∀), a property-directed semi-algorithm for automatic inference of invariants in a universal fragment of first-order logic. PDR∀ is an extension of Bradley's PDR/IC3 algorithm for inference of propositional invariants. PDR∀ terminates when it either discovers a concrete counterexample, infers an inductive universal invariant strong enough to establish the desired safety property, or finds a proof that such an invariant does not exist. PDR∀ is not guaranteed to terminate. However, we prove that under certain conditions, e.g., when reasoning about programs manipulating singly-linked lists, it does. We implemented an analyzer based on PDR∀ and applied it to a collection of list-manipulating programs. Our analyzer was able to automatically infer universal invariants strong enough to establish memory safety and certain functional correctness properties, show the absence of such invariants for certain natural programs and specifications, and detect bugs. All of this was done without the need for user-supplied abstraction predicates.
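
For reference, an inductive invariant Inv establishing a safety property Safe for a system with initial states Init and transition relation Tr must satisfy the three standard conditions $\mathit{Init} \Rightarrow \mathit{Inv}$, $\mathit{Inv} \wedge \mathit{Tr} \Rightarrow \mathit{Inv}'$ (where $\mathit{Inv}'$ denotes Inv over the post-state), and $\mathit{Inv} \Rightarrow \mathit{Safe}$; the semi-algorithm searches for such an Inv restricted to universally quantified formulas, or for a proof that none exists.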
