Journal of the ACM (JACM)

Latest Articles

Engineering with Logic: Rigorous Test-Oracle Specification and Validation for TCP/IP and the Sockets API

Conventional computer engineering relies on test-and-debug development processes, with the behavior of common interfaces described (at best) with prose specification documents. But prose specifications cannot be used in test-and-debug development in any automated way, and prose is a poor medium for... (more)

The PCL Theorem: Transactions cannot be Parallel, Consistent, and Live

We establish a theorem called the PCL theorem, which states that it is impossible to design a transactional memory algorithm that ensures (1) parallelism, i.e., transactions do not need to synchronize unless they access the same application objects, (2) very little consistency, i.e., a consistency condition, called weak adaptive consistency,... (more)

Fast Learning Requires Good Memory: A Time-Space Lower Bound for Parity Learning

We prove that any algorithm for learning parities requires either a memory of quadratic size or an exponential number of samples. This proves a recent conjecture of  Steinhardt et al. (2016) and shows that for some learning problems, a large storage space is crucial. More formally, in the problem of parity learning, an unknown string x... (more)
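
To make the setting concrete, here is a small, self-contained sketch (our own code, not from the paper) of the parity-learning setup together with the memory-heavy learner the lower bound is contrasted with: Gaussian elimination over GF(2) recovers x from O(n) samples but stores an n × (n+1) system, i.e., quadratic memory.

```python
import random

def parity_sample(x, rng=random):
    """One parity-learning sample: a uniform a in {0,1}^n and the label a.x mod 2."""
    a = [rng.randint(0, 1) for _ in range(len(x))]
    return a, sum(ai * xi for ai, xi in zip(a, x)) % 2

def learn_by_elimination(samples, n):
    """Recover x by Gaussian elimination over GF(2).  The eliminated system
    occupies Theta(n^2) bits -- the 'quadratic memory' end of the trade-off;
    the theorem says that with much less memory, exponentially many samples
    become necessary."""
    rows = [a[:] + [b] for a, b in samples]      # augmented matrix over GF(2)
    pivot_row, pivot_cols = 0, []
    for col in range(n):
        sel = next((r for r in range(pivot_row, len(rows)) if rows[r][col]), None)
        if sel is None:
            continue                             # no pivot in this column yet
        rows[pivot_row], rows[sel] = rows[sel], rows[pivot_row]
        for r in range(len(rows)):
            if r != pivot_row and rows[r][col]:
                rows[r] = [x ^ y for x, y in zip(rows[r], rows[pivot_row])]
        pivot_cols.append(col)
        pivot_row += 1
    if len(pivot_cols) < n:
        return None                              # not enough independent samples
    x = [0] * n
    for r, col in enumerate(pivot_cols):
        x[col] = rows[r][n]
    return x

n = 16
secret = [random.randint(0, 1) for _ in range(n)]
samples = [parity_sample(secret) for _ in range(3 * n)]  # O(n) samples suffice here
print(learn_by_elimination(samples, n) == secret)        # almost surely True
```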

Deterministic Edge Connectivity in Near-Linear Time

We present a deterministic algorithm that computes the edge-connectivity of a graph in near-linear time. This is for a simple undirected unweighted... (more)

Computing the Homology of Basic Semialgebraic Sets in Weak Exponential Time

We describe and analyze an algorithm for computing the homology (Betti numbers and torsion coefficients) of basic semialgebraic sets that works in... (more)

Scaling Exponential Backoff: Constant Throughput, Polylogarithmic Channel-Access Attempts, and Robustness

Randomized exponential backoff is a widely deployed technique for coordinating access to a shared resource. A good backoff protocol should, arguably, satisfy three natural properties: (1) it should provide constant throughput, wasting as little time as possible; (2) it should require few failed access attempts, minimizing the amount of wasted... (more)
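
For readers unfamiliar with the underlying protocol, the following is a minimal sketch of standard binary exponential backoff on a slotted shared channel; it illustrates the baseline the three properties are measured against, not the paper's improved protocol. The simulation loop and parameter names are our own illustrative choices.

```python
import random

def exponential_backoff(num_stations, max_rounds=100_000, seed=0):
    """Simulate binary exponential backoff on a single shared channel.

    Each station with a pending packet transmits in the current round with
    probability 2^-b, where b is its number of collisions so far.  A round
    succeeds iff exactly one station transmits; on a collision, every station
    involved doubles its backoff window.
    """
    rng = random.Random(seed)
    backoff = [0] * num_stations              # collision count per station
    pending = set(range(num_stations))
    for round_no in range(1, max_rounds + 1):
        transmitting = [s for s in pending if rng.random() < 2.0 ** -backoff[s]]
        if len(transmitting) == 1:            # success: exactly one transmission
            pending.discard(transmitting[0])
            if not pending:
                return round_no
        elif len(transmitting) > 1:           # collision: back off
            for s in transmitting:
                backoff[s] += 1
    return None                               # not all packets delivered in time

print(exponential_backoff(num_stations=50))
```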

About JACM

The Journal of the ACM (JACM) provides coverage of the most significant work on principles of computer science, broadly construed. The scope of research we cover encompasses contributions of lasting value to any area of computer science. To be accepted, a paper must be judged to be truly outstanding in its field. JACM is interested in work in core computer science and at the boundaries, both the boundaries of subdisciplines of computer science and the boundaries between computer science and other fields.

Editorial Process

The Journal of the ACM begins the refereeing process with a "quick review", to check whether the manuscript has a plausible chance of meeting JACM's high standards, even if all the claimed results are correct. JACM tries to cover a broad spectrum of areas, and can only accept 4-5 papers in any given area every year. Thus, we focus on the most significant papers in each area that will be of interest to the broad community, and reject many papers that would be accepted by other journals.

Important Note on P/NP

Some submissions purport to solve a long-standing open problem in complexity theory, such as the P/NP problem. Many of these turn out to be mistaken, and such submissions tax JACM volunteer editors and reviewers.

Hierarchical Clustering: Objective Functions and Algorithms

Hierarchical clustering is a recursive partitioning of a dataset into clusters at increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta (2016) framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a good hierarchical clustering is one that minimizes a particular cost function. He showed that this cost function has certain desirable properties: to achieve optimal cost, disconnected components must be separated first, and in cliques all clusterings achieve the same cost. We take an axiomatic approach to defining good objective functions for both similarity- and dissimilarity-based hierarchical clustering. We characterize a set of admissible objective functions (including the one introduced by Dasgupta) that have the property that when the input admits a natural ground-truth hierarchical clustering, the ground-truth clustering has an optimal value. Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better and faster algorithms for hierarchical clustering. We also initiate a beyond-worst-case analysis of the complexity of the problem, design an algorithm for this scenario, and provide experimental evidence of its efficiency.
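
As background, the cost function introduced by Dasgupta (2016) is commonly stated as cost(T) = Σ_{i,j} w_ij · |leaves(T[i ∨ j])|, where T[i ∨ j] is the subtree rooted at the least common ancestor of points i and j and w_ij is their similarity. The following minimal sketch (our own representation of trees as nested tuples, not the paper's) computes this cost and illustrates the clique property mentioned above.

```python
def dasgupta_cost(tree, similarity):
    """Dasgupta (2016) cost: sum over pairs (i, j) of w_ij times the number of
    leaves under the least common ancestor of i and j in the hierarchy.

    `tree` is a nested tuple of leaves, e.g. ((0, 1), (2, 3));
    `similarity` maps frozenset({i, j}) -> weight w_ij.
    """
    def leaves(t):
        return [t] if not isinstance(t, tuple) else [x for c in t for x in leaves(c)]

    def cost(t):
        if not isinstance(t, tuple):
            return 0
        total = sum(cost(child) for child in t)
        n_leaves = len(leaves(t))
        child_leaves = [leaves(child) for child in t]
        # pairs split at this node have their least common ancestor here
        for a in range(len(child_leaves)):
            for b in range(a + 1, len(child_leaves)):
                for i in child_leaves[a]:
                    for j in child_leaves[b]:
                        total += similarity.get(frozenset({i, j}), 0) * n_leaves
        return total

    return cost(tree)

# A clique on {0,1,2,3} with unit similarities: every binary hierarchy has the
# same cost (20 here), one of the desirable properties noted above.
w = {frozenset({i, j}): 1 for i in range(4) for j in range(i + 1, 4)}
print(dasgupta_cost(((0, 1), (2, 3)), w), dasgupta_cost((((0, 1), 2), 3), w))
```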

Pseudorandomness from Shrinkage

One powerful theme in complexity theory and pseudorandomness in the past few decades has been the use of lower bounds to give pseudorandom generators (PRGs). However, the general results using this hardness vs. randomness paradigm suffer a quantitative loss in parameters, and hence do not give nontrivial implications for models where we don't know super-polynomial lower bounds but do know lower bounds of a fixed polynomial. We show that when such lower bounds are proved using random restrictions, we can construct PRGs that are essentially best possible without in turn improving the lower bounds. More specifically, say that a circuit family has shrinkage exponent Γ if a random restriction leaving a p fraction of variables unset shrinks the size of any circuit in the family by a factor of p^{Γ+o(1)}. Our PRG uses a seed of length s^{1/(Γ+1)+o(1)} to fool circuits in the family of size s. By using this generic construction, we get PRGs with polynomially small error for the following classes of circuits of size s, with the following seed lengths: (1) for de Morgan formulas, seed length s^{1/3+o(1)}; (2) for formulas over an arbitrary basis, seed length s^{1/2+o(1)}; (3) for read-once de Morgan formulas, seed length s^{.234...}; (4) for branching programs of size s, seed length s^{1/2+o(1)}. The previous best PRGs known for these classes used seeds of length bigger than n/2 to output n bits, and worked only for size s = O(n) (Bogdanov, Papakonstantinou, and Wan).
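
To illustrate the basic operation the construction is built on, here is a minimal sketch (our own code, not the paper's generator) of applying a p-random restriction to a de Morgan formula: each variable is left unset with probability p and otherwise fixed to a uniform bit, after which the formula is simplified. Shrinkage says the expected simplified size drops by roughly a p^Γ factor.

```python
import random

def random_restriction(n, p, rng=random):
    """Leave each of the n variables unset ('*') with probability p,
    otherwise fix it to a uniform random bit."""
    return ['*' if rng.random() < p else rng.randint(0, 1) for _ in range(n)]

def restrict(formula, rho):
    """Apply a restriction to a de Morgan formula given as nested tuples:
    ('var', i), ('not', f), ('and', f, g), ('or', f, g).
    Returns a simplified formula, or the constant 0 / 1."""
    kind = formula[0]
    if kind == 'var':
        val = rho[formula[1]]
        return formula if val == '*' else val
    if kind == 'not':
        f = restrict(formula[1], rho)
        return 1 - f if f in (0, 1) else ('not', f)
    f, g = restrict(formula[1], rho), restrict(formula[2], rho)
    if kind == 'and':
        if 0 in (f, g): return 0
        if f == 1: return g
        if g == 1: return f
        return ('and', f, g)
    else:  # 'or'
        if 1 in (f, g): return 1
        if f == 0: return g
        if g == 0: return f
        return ('or', f, g)

# x0 AND (x1 OR NOT x2), hit with a restriction that keeps half the variables free
phi = ('and', ('var', 0), ('or', ('var', 1), ('not', ('var', 2))))
print(restrict(phi, random_restriction(3, 0.5)))
```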

Approximate counting, the Lovász local lemma, and inference in graphical models

In this paper, we introduce a new approach for approximate counting in bounded-degree systems with higher-order constraints. Our main result is an algorithm to approximately count the number of solutions to a CNF formula Φ when the width is logarithmic in the maximum degree. This closes an exponential gap between the known upper and lower bounds. Moreover, our algorithm extends straightforwardly to approximate sampling, which shows that under Lovász Local Lemma-like conditions it is not only possible to find a satisfying assignment, it is also possible to generate one approximately uniformly at random from the set of all satisfying assignments. Our approach is a significant departure from earlier techniques in approximate counting, and is based on a framework to bootstrap an oracle for computing marginal probabilities on individual variables. Finally, we give an application of our results to show that it is algorithmically possible to sample from the posterior distribution in an interesting class of graphical models.
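
For context, the constructive search guarantee alluded to above ("possible to find a satisfying assignment" under Lovász Local Lemma-like conditions) is usually obtained via the Moser-Tardos resampling algorithm; the counting and sampling layer is the paper's contribution on top of it. A minimal sketch of Moser-Tardos for CNF formulas, with our own naming, follows.

```python
import random

def moser_tardos(clauses, n, rng=random):
    """Moser-Tardos algorithm: start from a uniform assignment and, while some
    clause is violated, resample the variables of one violated clause uniformly.
    `clauses` is a list of lists of signed literals, e.g. [1, -3] means (x1 OR NOT x3).
    Under Lovász Local Lemma-type conditions this terminates quickly in expectation."""
    assign = [rng.randint(0, 1) for _ in range(n + 1)]   # 1-indexed variables

    def violated(clause):
        # a clause is violated iff every one of its literals evaluates to false
        return all((assign[abs(l)] == 0) if l > 0 else (assign[abs(l)] == 1)
                   for l in clause)

    while True:
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assign[1:]
        for lit in rng.choice(bad):                      # resample one bad clause
            assign[abs(lit)] = rng.randint(0, 1)

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
print(moser_tardos([[1, 2], [-1, 3], [-2, -3]], n=3))
```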

Nonhomogeneous Place-Dependent Markov Chains, Unsynchronized AIMD, and Network Utility Maximization

We present a solution to a class of network utility maximization (NUM) problems using minimal communication. The constraints of the problem are inspired less by TCP-like congestion control than by problems in the area of the internet of things and related areas, in which the need arises to bring the behavior of a large group of agents to a social optimum. The approach uses only intermittent feedback, no inter-agent communication, and no common clock. The proposed algorithm combines the classical AIMD algorithm with a simple probabilistic rule for the agents to respond to a capacity signal. This leads to a nonhomogeneous Markov chain, and we show almost sure convergence of this chain to the social optimum.
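
As background, the classical AIMD rule referred to above is: each agent increases its rate additively until a capacity event is signalled, and then backs off multiplicatively. The sketch below shows unsynchronized AIMD with a probabilistic response to the one-bit capacity signal; the probabilities and parameter names are illustrative placeholders, not the response rule derived in the paper.

```python
import random

def aimd_simulation(num_agents, capacity, alpha=1.0, beta=0.5,
                    respond_prob=0.7, steps=1000, seed=0):
    """Unsynchronized AIMD: every agent adds `alpha` to its rate each step.
    When aggregate demand exceeds `capacity`, a one-bit signal is broadcast and
    each agent independently backs off (multiplies its rate by `beta`) only
    with probability `respond_prob` -- a stand-in for the paper's probabilistic
    response rule."""
    rng = random.Random(seed)
    rates = [rng.uniform(0, capacity / num_agents) for _ in range(num_agents)]
    for _ in range(steps):
        rates = [r + alpha for r in rates]                  # additive increase
        if sum(rates) > capacity:                           # intermittent feedback
            rates = [r * beta if rng.random() < respond_prob else r
                     for r in rates]                        # multiplicative decrease
    return rates

print([round(r, 2) for r in aimd_simulation(num_agents=5, capacity=100)])
```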

Towards Verifying Nonlinear Integer Arithmetic

We eliminate a key roadblock to efficient verification of nonlinear integer arithmetic using CDCL SAT solvers, by showing how to construct short resolution proofs for many properties of the most widely used multiplier circuits. Such short proofs were conjectured not to exist. More precisely, we give n^{O(1)} size regular resolution proofs for arbitrary degree-2 identities on array, diagonal, and Booth multipliers and n^{O(log n)} size proofs for these identities on Wallace tree multipliers. We also describe short proofs for equivalence between multipliers.

Near-optimal linear decision trees for k-SUM and related problems

We construct near optimal linear decision trees for a variety of decision problems in combinatorics and discrete geometry. For example, for any constant $k$, we construct linear decision trees that solve the $k$-SUM problem on $n$ elements using $O(n \log^2 n)$ linear queries. Moreover, the queries we use are comparison queries, which compare the sums of two $k$-subsets; when viewed as linear queries, comparison queries are $2k$-sparse and have only $\{-1,0,1\}$ coefficients. We give similar constructions for sorting sumsets $A+B$ and for solving the SUBSET-SUM problem, both with an optimal number of queries, up to poly-logarithmic terms. Our constructions are based on the notion of "inference dimension", recently introduced by the authors in the context of active classification with comparison queries. This can be viewed as another contribution to the fruitful link between machine learning and discrete geometry, which goes back to the discovery of the VC dimension.
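
To see why a comparison query is a 2k-sparse {-1, 0, 1} linear query, compare the sums of two k-subsets S and T of the input: the linear form sum(S) - sum(T) has coefficient +1 on S \ T, -1 on T \ S, and 0 elsewhere. A tiny illustrative helper (the name is ours):

```python
def comparison_query(values, S, T):
    """Compare the sums of two k-subsets S and T of the input.
    As a linear query over the input vector this has coefficients
    +1 on S \\ T, -1 on T \\ S, and 0 elsewhere, hence it is 2k-sparse
    with entries in {-1, 0, 1}."""
    diff = sum(values[i] for i in S) - sum(values[j] for j in T)
    return (diff > 0) - (diff < 0)            # sign of the linear form

vals = [3, -1, 4, 1, -5, 9]
print(comparison_query(vals, S={0, 2, 5}, T={1, 3, 4}))   # compares two 3-subsets
```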

Shellability is NP-complete

We prove that for every d ≥ 2, deciding if a pure, d-dimensional, simplicial complex is shellable is NP-hard, hence NP-complete. This resolves a question raised, e.g., by Danaraj and Klee in 1978. Our reduction also yields that for every d ≥ 2 and k ≥ 0, deciding if a pure, d-dimensional, simplicial complex is k-decomposable is NP-hard. For d ≥ 3, both problems remain NP-hard when restricted to contractible pure d-dimensional complexes. Another simple corollary of our result is that it is NP-hard to decide whether a given poset is CL-shellable.

On the decidability of membership in matrix-exponential semigroups

Given k+1 square matrices A_1, ..., A_k, C, all of the same dimension and with real algebraic coordinates, we examine the problem of deciding whether there exist matrices of the form exp(A_i t_j) whose product equals C. Our results have applications to reachability problems for linear hybrid automata. Our decidability proof relies on a number of theorems from algebraic and transcendental number theory, most notably those of Baker, Lindemann, and Masser, as well as some useful geometric and linear-algebraic results, including the Minkowski-Weyl theorem and a new (to the best of our knowledge) result about the uniqueness of strictly upper triangular matrix logarithms of upper unitriangular matrices. On the other hand, our undecidability results are shown by reduction from Hilbert's Tenth Problem, through a series of matrix constructions.

A theory of NP-completeness and ill-conditioning for approximate real computations

We develop a complexity theory for approximate real computations. We first produce a theory for exact computations but with condition numbers. The input size depends on a condition number, which is not assumed known by the machine. The theory admits deterministic and nondeterministic polynomial time recognizable problems. We prove that P is not NP in this theory if and only if P is not NP in the BSS theory over the reals. Then we develop a theory with weak and strong approximate computations. This theory is intended to model actual numerical computations that are usually performed in floating point arithmetic. It admits classes P and NP and also an NP-complete problem. We relate the P vs NP question in this new theory to the classical P vs NP problem.

Bar Induction is Compatible with Constructive Type Theory

Powerful yet effective induction principles play an important role in computing, being a paramount component of programming languages, automated reasoning and program verification systems. The Bar Induction principle is a fundamental concept of intuitionism, which is equivalent to the standard principle of transfinite induction. In this work we investigate the compatibility of several variants of Bar Induction with Constructive Type Theory (CTT), a dependent type theory in the spirit of Martin-Löf's extensional theory. We first show that CTT is compatible with a Bar Induction principle for sequences of numbers. Then, we establish the compatibility of CTT with a more general Bar Induction principle for sequences of name-free closed terms. The formalization of the latter principle within the theory involved enriching CTT's term syntax with a limit constructor and showing that consistency is preserved. Furthermore, we provide novel insights regarding Bar Induction, such as the non-truncated version of Bar Induction on monotone bars being intuitionistically false. These enhancements are carried out formally using the Nuprl proof assistant, which implements CTT, and the formalization of CTT within the Coq proof assistant presented in previous works.

Parallel Bayesian Search with no Coordination

Coordinating the actions of agents (e.g., volunteers analyzing radio signals in SETI@home) yields efficient search algorithms. However, such efficiency often comes at the cost of implementing complex coordination mechanisms, which may be expensive in terms of communication and/or computation overheads. Instead, non-coordinating algorithms, in which each agent operates independently from the others, are typically very simple and easy to implement. They are also inherently robust to slight misbehaviors, or even crashes, of agents. In this paper, we investigate the "price of non-coordinating", in terms of search performance, and we show that this price is actually quite small. Specifically, we consider a parallel version of a classical Bayesian search problem, where a set of k ≥ 1 searchers are looking for a treasure placed in one of the boxes indexed by positive integers, according to some distribution p. Each searcher can open a random box at each step, and the objective is to find the treasure in a minimum number of steps. We show that there is a very simple non-coordinating algorithm which has expected running time at most 4(1 - 1/(k+1))^2 * OPT + 10, where OPT is the expected running time of the best fully coordinated algorithm. Our algorithm does not even use the precise description of the distribution p, but only the relative likelihood of the boxes. We prove that, under this restriction, our algorithm has the best possible competitive ratio with respect to OPT. For the case where a complete description of the distribution p is given to the search algorithm, we describe an optimal non-coordinating algorithm for Bayesian search. This latter algorithm can be twice as fast as our former algorithm in practical scenarios such as uniform distributions. All these results provide a complete characterization of non-coordinating Bayesian search. The take-away message is that, for their simplicity and robustness, non-coordinating algorithms are viable alternatives to complex coordinating mechanisms subject to significant overheads. Most of these results also apply to linear search, in which the indices of the boxes reflect their relative importance, and where important boxes must be visited first.
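
The flavor of a non-coordinating strategy can be conveyed by a deliberately simple baseline (ours for illustration, not the paper's algorithm): every searcher independently samples a box to open according to the prior p at every step, so no communication or common clock is needed.

```python
import random

def independent_searchers(p, treasure, num_searchers, rng=random):
    """A simple non-coordinating baseline (not the algorithm of the paper):
    at every step each of the k searchers independently opens a box drawn
    from the prior p.  Returns the number of steps until some searcher finds
    the treasure."""
    boxes = list(range(len(p)))
    step = 0
    while True:
        step += 1
        for _ in range(num_searchers):
            if rng.choices(boxes, weights=p, k=1)[0] == treasure:
                return step

prior = [0.4, 0.3, 0.2, 0.1]
trials = [independent_searchers(prior, treasure=3, num_searchers=2) for _ in range(1000)]
print(sum(trials) / len(trials))   # empirical expected number of steps
```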

Uniform Sampling through the Lovász Local Lemma

We propose a new algorithmic framework, called "partial rejection sampling", to draw samples exactly from a product distribution, conditioned on none of a number of bad events occurring. Our framework builds new connections between the variable framework of the Lovász Local Lemma and some classical sampling algorithms, such as the "cycle-popping" algorithm for rooted spanning trees by Wilson (1996). Among other applications, we discover new algorithms to sample satisfying assignments of k-CNF formulas with bounded variable occurrences.
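
For readers unfamiliar with it, Wilson's cycle-popping procedure mentioned above can be sketched as follows: every non-root vertex samples a uniformly random neighbour as its parent, and any directed cycle that appears in the parent pointers is "popped" (its arcs are resampled) until the pointers form a tree rooted at the chosen root. A minimal sketch with our own function names:

```python
import random

def cycle_popping_spanning_tree(adj, root, rng=random):
    """Wilson's cycle-popping sampler: give each non-root vertex a uniformly
    random parent among its neighbours; while the parent pointers contain a
    directed cycle, resample ('pop') the parents on that cycle.  On termination
    the pointers form a uniformly random spanning tree rooted at `root`."""
    parent = {v: rng.choice(adj[v]) for v in adj if v != root}

    def find_cycle():
        for start in parent:
            seen, v = set(), start
            while v != root:
                if v in seen:                 # found a cycle; recover it through v
                    cycle, u = [v], parent[v]
                    while u != v:
                        cycle.append(u)
                        u = parent[u]
                    return cycle
                seen.add(v)
                v = parent[v]
        return None

    cycle = find_cycle()
    while cycle is not None:
        for v in cycle:                       # pop: resample every arc on the cycle
            parent[v] = rng.choice(adj[v])
        cycle = find_cycle()
    return parent                             # child -> parent edges of the tree

# 4-cycle graph 0-1-2-3-0, rooted at 0
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(cycle_popping_spanning_tree(adj, root=0))
```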

On the complexity of hazard-free circuits

The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is an important problem in circuit design. Our main lower-bound result unconditionally shows the existence of functions whose circuit complexity is polynomially bounded while every hazard-free implementation is provably of exponential size. Previous lower bounds on the hazard-free complexity were only valid for depth 2 circuits. The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards. These results follow from a crucial structural insight: Hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions. Thus, we can apply known monotone complexity lower bounds to find lower bounds on the hazard-free complexity. We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions. As our main upper-bound result we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blow-up in the number of gates. Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement. As a side result we establish the NP-completeness of several hazard detection problems.
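
A concrete instance of a hazard, for readers unfamiliar with the notion: in the standard multiplexer formula (s AND a) OR (NOT s AND b), with a = b = 1 the Boolean function is 1 regardless of s, yet while s is unstable the circuit can momentarily output 0. Hazards are conventionally detected with a ternary (0, 1, u) evaluation; the sketch below (our own code, using that standard semantics) exhibits the multiplexer hazard.

```python
U = 'u'   # unstable / unknown value

def t_not(a):
    return U if a == U else 1 - a

def t_and(a, b):
    if a == 0 or b == 0: return 0      # a stable 0 forces the output
    if a == 1 and b == 1: return 1
    return U

def t_or(a, b):
    if a == 1 or b == 1: return 1      # a stable 1 forces the output
    if a == 0 and b == 0: return 0
    return U

def mux(s, a, b):
    """The standard multiplexer circuit (s AND a) OR (NOT s AND b),
    evaluated over the ternary domain {0, 1, u}."""
    return t_or(t_and(s, a), t_and(t_not(s), b))

# With a = b = 1 the Boolean function equals 1 for either value of s, but with
# s unstable this implementation outputs 'u': a hazard.
print(mux(U, 1, 1))   # -> 'u'
```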

Bandits and Experts in Metric Spaces

In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of trials so as to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is quite well understood, bandit problems with large strategy sets are still a topic of very active investigation, motivated by practical applications such as online auctions and web advertisement. The goal of such research is to identify broad and natural classes of strategy sets and payoff functions which enable the design of efficient solutions. In this work we study a very general setting for the multi-armed bandit problem in which the strategies form a metric space, and the payoff function satisfies a Lipschitz condition with respect to the metric. We refer to this problem as the "Lipschitz MAB problem". We present a solution for the multi-armed bandit problem in this setting. That is, for every metric space we define an isometry invariant which bounds from below the performance of Lipschitz MAB algorithms for this metric space, and we present an algorithm which comes arbitrarily close to meeting this bound. Furthermore, our technique gives even better results for benign payoff functions. We also address the full-feedback ("best expert") version of the problem, where after every round the payoffs from all arms are revealed.
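
To make the Lipschitz MAB setting concrete, here is a sketch of a standard baseline (not the paper's algorithm): discretize the metric space, taken here to be the interval [0, 1], into evenly spaced arms and run UCB1 on them; the Lipschitz condition bounds the additional regret incurred by the discretization. The function and parameter names are our own.

```python
import math, random

def discretized_ucb(payoff, num_arms=20, horizon=5000, rng=random):
    """Baseline for Lipschitz MAB on [0, 1]: uniform discretization + UCB1.
    `payoff` is a Lipschitz mean-payoff function; observations are noisy."""
    arms = [i / (num_arms - 1) for i in range(num_arms)]
    counts = [0] * num_arms
    means = [0.0] * num_arms
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= num_arms:
            i = t - 1                                   # play every arm once
        else:                                           # UCB1 index
            i = max(range(num_arms),
                    key=lambda j: means[j] + math.sqrt(2 * math.log(t) / counts[j]))
        reward = payoff(arms[i]) + rng.gauss(0, 0.1)    # noisy payoff observation
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]
        total += reward
    return total / horizon

# A 1-Lipschitz mean-payoff function on [0, 1] peaking at 0.3.
print(discretized_ucb(lambda x: 1 - abs(x - 0.3)))
```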

From Real-Time Logic to Timed Automata

We show how to transform formulae written in the real-time temporal logic MITL into timed automata recognizing their satisfying models, taken to be continuous-time Boolean signals. This compositional construction is much simpler than previously known; it supports both past and future operators and can easily be extended.
