Journal of the ACM (JACM)


About JACM

The Journal of the ACM (JACM) provides coverage of the most significant work on principles of computer science, broadly construed. The scope of research we cover encompasses contributions of lasting value to any area of computer science. To be accepted, a paper must be judged to be truly outstanding in its field. JACM is interested in work in core computer science and at the boundaries, both the boundaries of subdisciplines of computer science and the boundaries between computer science and other fields.

Editorial Process

The Journal of the ACM begins the refereeing process with a "quick review" to check whether the manuscript has a plausible chance of meeting JACM's high standards, even if all the claimed results are correct. JACM tries to cover a broad spectrum of areas and can only accept 4-5 papers in any given area every year. Thus, we try to focus on the most significant papers in each area, those that would be of interest to the broad community, and reject many papers that would be accepted by other journals.

Important Note on P/NP

Some submissions purport to solve a long-standing open problem in complexity theory, such as the P/NP problem. Many of these turn out to be mistaken, and such submissions tax JACM volunteer editors and reviewers.

Hierarchical Clustering: Objective Functions and Algorithms

Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly fine granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta (2016) framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a good hierarchical clustering is one that minimizes a particular cost function. He showed that this cost function has certain desirable properties: in order to achieve optimal cost, disconnected components must be separated first, and in cliques all clusterings achieve the same cost. We take an axiomatic approach to defining good objective functions for both similarity- and dissimilarity-based hierarchical clustering. We characterize a set of admissible objective functions (that includes the one introduced by Dasgupta) having the property that when the input admits a natural ground-truth hierarchical clustering, the ground-truth clustering has an optimal value. Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better and faster algorithms for hierarchical clustering. We also initiate a beyond-worst-case analysis of the complexity of the problem, design an algorithm for this scenario, and provide experimental evidence of its efficiency.
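Dasgupta's objective makes "a good hierarchical clustering minimizes cost" concrete: each pair of points pays its similarity weight times the number of leaves under the pair's least common ancestor, so heavy pairs should be split as late as possible. A minimal Python sketch on toy data; the tree encoding and names are illustrative, not from the paper.

```python
from itertools import combinations

def dasgupta_cost(tree, w):
    """Dasgupta's (2016) cost of a hierarchical clustering.

    tree : nested tuples over leaf labels, e.g. ((0, 1), (2, 3))
    w    : dict frozenset({i, j}) -> similarity weight
    Each pair {i, j} pays w(i, j) times the number of leaves under the
    pair's least common ancestor in the tree."""
    def leaves(node):
        if not isinstance(node, tuple):
            return frozenset([node])
        return frozenset().union(*map(leaves, node))

    total, stack = 0.0, [tree]
    while stack:
        node = stack.pop()
        if not isinstance(node, tuple):
            continue
        sets = [leaves(c) for c in node]
        size = sum(len(s) for s in sets)       # leaves under this LCA
        for a, b in combinations(sets, 2):     # pairs split exactly here
            total += sum(w.get(frozenset((i, j)), 0.0) for i in a for j in b) * size
        stack.extend(node)
    return total

# Keeping the heavy pairs {0,1} and {2,3} together until the end is cheaper.
w = {frozenset((0, 1)): 1.0, frozenset((2, 3)): 1.0, frozenset((1, 2)): 0.1}
print(dasgupta_cost(((0, 1), (2, 3)), w))   # 2*1 + 2*1 + 4*0.1 = 4.4
print(dasgupta_cost(((0, 2), (1, 3)), w))   # 4*1 + 4*1 + 4*0.1 = 8.4
```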

The Salesman's Improved Paths Through Forests

We give a new, strongly polynomial-time algorithm and improved analysis for the metric $s-t$ path TSP. It finds a tour of cost less than 1.53 times the optimum of the subtour elimination LP, while known examples show that 1.5 is a lower bound for the integrality gap. A key new idea is the deletion of some edges of Christofides' trees, which is then accompanied by novel arguments of the analysis: edge-deletion disconnects the trees, which are then partly reconnected by "parity correction". We show that the arising "connectivity correction" can be achieved for a minor extra cost. On the one hand, this algorithm and analysis extend previous tools such as the best-of-many Christofides algorithm. On the other hand, powerful new tools are solicited, such as a flow problem for analyzing the reconnection cost, and the construction of a set of more and more restrictive spanning trees, each of which can still be found by the greedy algorithm. We show that these trees can replace the convex combination of spanning trees in the best-of-many Christofides algorithm. These new methods lead to improving the integrality ratio and approximation guarantee below 1.53, as already sketched in the preliminary, shortened version of this article that appeared in FOCS 2016. The algorithm and analysis have been significantly simplified in the current article, and details of proofs and explanations have been added.
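To make the "parity correction" step concrete: a connected multigraph admits an Eulerian s-t walk exactly when s and t have odd degree and every other vertex has even degree, so Christofides-style algorithms add a minimum-weight T-join on the wrong-parity vertices. A small self-contained sketch (vertex names illustrative) that identifies that set:

```python
def parity_correction_set(edges, s, t):
    """Vertices whose degree parity is wrong for an s-t tour skeleton.

    A connected multigraph admits an Eulerian s-t walk iff s and t have
    odd degree and every other vertex has even degree; Christofides-style
    algorithms restore these parities by adding a minimum-weight T-join
    on the set returned here."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return {v for v, d in deg.items() if (v in (s, t)) != (d % 2 == 1)}

# In the spanning tree {s-a, a-b, a-t}, vertices a and b have the wrong
# parity; adding the T-join edge {a, b} fixes both (walk: s-a-b-a-t).
print(parity_correction_set([("s", "a"), ("a", "b"), ("a", "t")], "s", "t"))
# {'a', 'b'} (set order may vary)
```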

Nonhomogeneous Place-Dependent Markov Chains, Unsynchronized AIMD, and Network Utility Maximization

We present a solution of a class of network utility maximization (NUM) problems using minimal communication. The constraints of the problem are inspired less by TCP-like congestion control than by problems in the area of the Internet of Things and related areas, in which the need arises to bring the behavior of a large group of agents to a social optimum. The approach uses only intermittent feedback, no inter-agent communication, and no common clock. The proposed algorithm combines the classical AIMD algorithm with a simple probabilistic rule by which the agents respond to a capacity signal. This leads to a nonhomogeneous Markov chain, and we show almost sure convergence of this chain to the social optimum.
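The following toy simulation illustrates the flavor of such a scheme: rates grow additively, and at each capacity event each agent independently backs off multiplicatively with a probability tied to its own utility gradient. The specific back-off probability below is an illustrative stand-in, chosen so that the average drifts equalize marginal utilities; it is not necessarily the paper's exact rule.

```python
import random

def aimd_num(grads, capacity, alpha=0.01, beta=0.5, gamma=1.0,
             steps=200_000, seed=0):
    """Toy unsynchronized AIMD for network utility maximization.

    Each agent's rate grows additively; whenever aggregate demand reaches
    capacity, a one-bit capacity signal is broadcast and each agent
    independently backs off multiplicatively with probability
    min(1, gamma * f_i'(x_i) / x_i).  This response rule is an
    illustrative stand-in: it balances each agent's drift so that the
    gradients f_i' equalize, which characterizes the social optimum."""
    rng = random.Random(seed)
    n = len(grads)
    x = [capacity / (2 * n)] * n
    avg = [0.0] * n
    for t in range(steps):
        for i in range(n):
            x[i] += alpha                        # additive increase
        if sum(x) >= capacity:                   # intermittent feedback only
            for i in range(n):
                if rng.random() < min(1.0, gamma * grads[i](x[i]) / x[i]):
                    x[i] *= beta                 # multiplicative decrease
        for i in range(n):
            avg[i] += (x[i] - avg[i]) / (t + 1)  # long-run average
    return avg

# Utilities 2*sqrt(x) and 4*sqrt(x): gradients 1/sqrt(x) and 2/sqrt(x).
# The social optimum under x1 + x2 <= 10 equalizes gradients: x2 = 4*x1.
print(aimd_num([lambda x: x ** -0.5, lambda x: 2 * x ** -0.5], 10.0))
```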

Towards Verifying Nonlinear Integer Arithmetic

We eliminate a key roadblock to efficient verification of nonlinear integer arithmetic using CDCL SAT solvers, by showing how to construct short resolution proofs for many properties of the most widely used multiplier circuits. Such short proofs were conjectured not to exist. More precisely, we give n^{O(1)}-size regular resolution proofs for arbitrary degree-2 identities on array, diagonal, and Booth multipliers, and n^{O(log n)}-size proofs for these identities on Wallace tree multipliers. We also describe short proofs for equivalence between multipliers.

The Weisfeiler-Leman Dimension of Planar Graphs is at most 3

We prove that the Weisfeiler-Leman (WL) dimension of the class of all finite planar graphs is at most 3. In particular, every finite planar graph is definable in first-order logic with counting using at most 4 variables. The previously best known upper bounds for the dimension and number of variables were 14 and 15, respectively. First we show that, for dimension 3 and higher, the WL-algorithm correctly tests isomorphism of graphs in a minor-closed class whenever it determines the orbits of the automorphism group of any arc-colored 3-connected graph belonging to this class. Then we prove that, apart from several exceptional graphs (which have WL-dimension at most 2), the individualization of two correctly chosen vertices of a colored 3-connected planar graph followed by the 1-dimensional WL-algorithm produces the discrete vertex partition. This implies that the 3-dimensional WL-algorithm determines the orbits of a colored 3-connected planar graph. As a byproduct of the proof, we get a classification of the 3-connected planar graphs with fixing number 3.
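For intuition, the 1-dimensional WL-algorithm invoked above is classical color refinement: each vertex's color is repeatedly replaced by the multiset of its neighbors' colors until the partition stabilizes. A minimal sketch, run on a wheel graph (3-connected and planar) with two individualized rim vertices; the encoding is illustrative:

```python
def wl1_refinement(adj, colors):
    """1-dimensional Weisfeiler-Leman (classical color refinement).

    adj    : dict vertex -> list of neighbours
    colors : dict vertex -> initial colour (small ints)
    Repeatedly replaces each vertex's colour by (own colour, multiset of
    neighbour colours) until the partition stops splitting."""
    col = dict(colors)
    while True:
        sig = {v: (col[v], tuple(sorted(col[u] for u in adj[v]))) for v in adj}
        ids = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: ids[sig[v]] for v in adj}
        if len(set(new.values())) == len(set(col.values())):
            return new
        col = new

# Wheel graph W5: a hub joined to a 5-cycle (3-connected and planar).
rim = 5
wheel = {"h": [f"r{i}" for i in range(rim)]}
for i in range(rim):
    wheel[f"r{i}"] = ["h", f"r{(i - 1) % rim}", f"r{(i + 1) % rim}"]

start = {v: 0 for v in wheel}
start["r0"], start["r1"] = 1, 2   # individualize two adjacent rim vertices
part = wl1_refinement(wheel, start)
print(len(set(part.values())) == len(part))  # True: the partition is discrete
```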

On the Parameterized Complexity of Approximating Dominating Set

We study the parameterized complexity of approximating the k-Dominating Set (DomSet) problem, where an integer k and a graph G on n vertices are given as input, and the goal is to find a dominating set of size at most F(k)·k whenever the graph G has a dominating set of size k. When such an algorithm runs in time T(k)·poly(n) (i.e., FPT-time) for some computable function T, it is said to be an F(k)-FPT-approximation algorithm for k-DomSet. Whether such an algorithm exists is listed in the seminal book of Downey and Fellows (2013) as one of the "most infamous" open problems in Parameterized Complexity. This work gives an almost complete answer to this question by showing the non-existence of such an algorithm under W[1] ≠ FPT and further providing tighter running time lower bounds under stronger hypotheses. Specifically, we prove the following for all computable functions T, F and every constant ε > 0:

- Assuming W[1] ≠ FPT, there is no F(k)-FPT-approximation algorithm for k-DomSet.
- Assuming the Exponential Time Hypothesis (ETH), there is no F(k)-approximation algorithm for k-DomSet that runs in T(k)·n^{o(k)} time.
- Assuming the Strong Exponential Time Hypothesis (SETH), for every integer k ≥ 2, there is no F(k)-approximation algorithm for k-DomSet that runs in T(k)·n^{k-ε} time.
- Assuming the k-SUM Hypothesis, for every integer k ≥ 3, there is no F(k)-approximation algorithm for k-DomSet that runs in T(k)·n^{⌈k/2⌉-ε} time.

Previously, only constant-ratio FPT-approximation algorithms were ruled out under W[1] ≠ FPT, and (log^{1/4-ε} k)-FPT-approximation algorithms were ruled out under ETH [Chen and Lin, FOCS 2016]. Recently, the non-existence of an F(k)-FPT-approximation algorithm for any function F was shown under Gap-ETH [Chalermsook et al., FOCS 2017]. Note that, to the best of our knowledge, no running time lower bound of the form n^{δk} for any absolute constant δ > 0 was known before, even for any constant-factor inapproximation ratio. Our results are obtained by establishing a connection between communication complexity and hardness of approximation, generalizing the ideas from a recent breakthrough work of Abboud et al. [FOCS 2017]. Specifically, we show that to prove hardness of approximation of a certain parameterized variant of the label cover problem, it suffices to devise a specific protocol for a communication problem that depends on which hypothesis we rely on. Each of these communication problems turns out to be either a well-studied problem or a variant of one; this allows us to easily apply known techniques to solve them.
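For context, the SETH-based bound is essentially tight against the trivial algorithm, which enumerates all size-k vertex subsets in n^{k+O(1)} time. A minimal sketch of that baseline (names illustrative):

```python
from itertools import combinations

def k_dominating_set(adj, k):
    """Exhaustive search for a dominating set of size at most k.

    Runs in O(n^k * poly(n)) time -- the trivial upper bound that the
    SETH-based lower bound above shows cannot be beaten substantially,
    even if one only asks for an F(k)-approximation."""
    closed = {v: set(adj[v]) | {v} for v in adj}   # closed neighbourhoods
    for cand in combinations(adj, k):
        if set().union(*(closed[v] for v in cand)) == set(adj):
            return set(cand)
    return None

# A 4-cycle has no dominating set of size 1, but many of size 2.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(k_dominating_set(c4, 1), k_dominating_set(c4, 2))   # None {0, 1}
```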

Shellability is NP-complete

We prove that for every d ≥ 2, deciding if a pure, d-dimensional, simplicial complex is shellable is NP-hard, hence NP-complete. This resolves a question raised, e.g., by Danaraj and Klee in 1978. Our reduction also yields that for every d ≥ 2 and k ≥ 0, deciding if a pure, d-dimensional, simplicial complex is k-decomposable is NP-hard. For d ≥ 3, both problems remain NP-hard when restricted to contractible pure d-dimensional complexes. Another simple corollary of our result is that it is NP-hard to decide whether a given poset is CL-shellable.

On the decidability of membership in matrix-exponential semigroups

Given k+1 square matrices A_1, ..., A_k, C, all of the same dimension and with real algebraic coordinates, we examine the problem of deciding whether there exist matrices of the form exp(A_i t_j) whose product equals C. Our results have applications to reachability problems for linear hybrid automata. Our decidability proof relies on a number of theorems from algebraic and transcendental number theory, most notably those of Baker, Lindemann, and Masser, as well as some useful geometric and linear-algebraic results, including the Minkowski-Weyl theorem and a new (to the best of our knowledge) result about the uniqueness of strictly upper triangular matrix logarithms of upper unitriangular matrices. On the other hand, our undecidability results are shown by reduction from Hilbert's Tenth Problem, through a series of matrix constructions.
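The unitriangular-logarithm fact has a simple computational reading: writing U = I + N with N strictly upper triangular (hence nilpotent), the Mercator series for log(I + N) terminates, yielding a strictly upper triangular logarithm; the article's contribution is its uniqueness. A NumPy sketch (assuming NumPy is available; the article itself works symbolically over algebraic numbers):

```python
import numpy as np

def unitriangular_log(U):
    """Strictly upper triangular logarithm of an upper unitriangular U.

    Writing U = I + N with N strictly upper triangular (hence nilpotent),
    the Mercator series log(I + N) = N - N^2/2 + N^3/3 - ... terminates
    after fewer than dim(U) terms, so it can be evaluated exactly."""
    n = U.shape[0]
    N = U - np.eye(n)
    L, P = np.zeros_like(N), np.eye(n)
    for m in range(1, n):
        P = P @ N                              # P = N^m
        L += (-1) ** (m + 1) * P / m
    return L

def nilpotent_exp(L):
    """exp(L) for strictly upper triangular L, via the terminating series."""
    n = L.shape[0]
    E, P, fact = np.eye(n), np.eye(n), 1
    for m in range(1, n):
        P, fact = P @ L, fact * m
        E += P / fact
    return E

U = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [0.0, 0.0, 1.0]])
print(np.allclose(nilpotent_exp(unitriangular_log(U)), U))  # True
```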

A theory of NP-completeness and ill-conditioning for approximate real computations

We develop a complexity theory for approximate real computations. We first produce a theory for exact computations but with condition numbers. The input size depends on a condition number, which is not assumed to be known by the machine. The theory admits deterministic and nondeterministic polynomial time recognizable problems. We prove that P is not NP in this theory if and only if P is not NP in the BSS theory over the reals. Then we develop a theory with weak and strong approximate computations. This theory is intended to model actual numerical computations, which are usually performed in floating point arithmetic. It admits classes P and NP and also an NP-complete problem. We relate the P vs. NP question in this new theory to the classical P vs. NP problem.

Invited Articles Foreword for 66:3

On the Computability of Conditional Probability

As inductive inference and machine learning methods in computer science see continued success, researchers are aiming to describe even more complex probabilistic models and inference algorithms. What are the limits of mechanizing probabilistic inference? We investigate the computability of conditional probability, a fundamental notion in probability theory and a cornerstone of Bayesian statistics, and show that there are computable joint distributions with noncomputable conditional distributions, ruling out the prospect of general inference algorithms, even inefficient ones. Specifically, we construct a pair of computable random variables in the unit interval such that the conditional distribution of the first variable given the second encodes the halting problem. Nevertheless, probabilistic inference is possible in many common modeling settings, and we prove several results giving broadly applicable conditions under which conditional distributions are computable. In particular, conditional distributions become computable when measurements are corrupted by independent computable noise with a sufficiently smooth density.
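The positive result has a concrete algorithmic reading: when the observation is Y + ε with ε independent noise of known smooth density, the conditional expectation can be written as a ratio of ordinary expectations and estimated from forward samples alone. A Monte Carlo sketch (the toy model and all names are illustrative, not from the article):

```python
import math, random

def posterior_mean(sample_joint, noise_pdf, y_obs, n=100_000, seed=0):
    """Estimate E[X | Y + eps = y_obs] for independent noise eps with a
    known smooth density, using the density-weighted identity
        E[X | Y + eps = y] = E[X * pdf(y - Y)] / E[pdf(y - Y)],
    which needs only forward samples of (X, Y) -- the kind of computable
    conditioning that the smooth-noise result makes possible."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x, y = sample_joint(rng)
        w = noise_pdf(y_obs - y)
        num += x * w
        den += w
    return num / den

def joint(rng):                    # toy model: X ~ U[0, 1], Y = X^2
    x = rng.random()
    return x, x * x

sigma = 0.1                        # Gaussian measurement noise
gauss = lambda t: math.exp(-t * t / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
print(posterior_mean(joint, gauss, y_obs=0.25))   # approx. 0.5 (= sqrt(0.25))
```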

On the complexity of hazard-free circuits

The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is an important problem in circuit design. Our main lower-bound result unconditionally shows the existence of functions whose circuit complexity is polynomially bounded while every hazard-free implementation is provably of exponential size. Previous lower bounds on the hazard-free complexity were only valid for depth 2 circuits. The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards. These results follow from a crucial structural insight: Hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions. Thus, we can apply known monotone complexity lower bounds to find lower bounds on the hazard-free complexity. We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions. As our main upper-bound result we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blow-up in the number of gates. Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement. As a side result we establish the NP-completeness of several hazard detection problems.
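For readers unfamiliar with hazards: a circuit has a (static) hazard on a partially unstable input if three-valued (Kleene) simulation cannot determine the output even though every stable completion of the input agrees. A small sketch using the textbook multiplexer example; the encoding is illustrative:

```python
from itertools import product

# Kleene three-valued logic; 'u' models an unstable/transitioning wire.
def t_not(a):    return {"0": "1", "1": "0", "u": "u"}[a]
def t_and(a, b): return "0" if "0" in (a, b) else ("1" if a == b == "1" else "u")
def t_or(a, b):  return "1" if "1" in (a, b) else ("0" if a == b == "0" else "u")

def has_hazard(circuit, n_inputs):
    """Find an input witnessing a hazard: the ternary evaluation yields
    'u' although every binary resolution of the 'u' inputs produces the
    same output, i.e. the implementation may glitch even though the
    function's value is already determined."""
    for tern in product("01u", repeat=n_inputs):
        if "u" not in tern or circuit(tern) != "u":
            continue
        res = {circuit(b) for b in product(*("01" if v == "u" else v for v in tern))}
        if len(res) == 1:
            return tern
    return None

# Standard MUX  (s & a) | (~s & b): glitches when a = b = 1 while s toggles.
mux = lambda v: t_or(t_and(v[0], v[1]), t_and(t_not(v[0]), v[2]))
print(has_hazard(mux, 3))  # ('u', '1', '1')
```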

Approximation Schemes for Independent Set and Sparse Subsets of Polygons

We present a $(1+\varepsilon)$-approximation algorithm with quasi-polynomial running time for computing a maximum weight independent set of polygons out of a given set of polygons in the plane. Contrasting this, the best known polynomial time algorithm for the problem has an approximation ratio of $n^{\varepsilon}$. Surprisingly, we can extend the algorithm to the problem of computing the maximum cardinality subset of the given set of polygons whose intersection graph fulfills some sparsity condition. For example, we show that one can approximate the maximum subset of polygons, such that the intersection graph of the subset is planar or does not contain a cycle of length $4$ (i.e., $K_{2,2}$). Our algorithm relies on a recursive partitioning scheme, whose backbone is the existence of balanced cuts with small complexity that intersect polygons from the optimal solution of a small total weight. For the case of large axis-parallel rectangles, we provide a polynomial time $(1+\varepsilon)$-approximation for the maximum weight independent set. Specifically, we consider the problem where each rectangle has one edge whose length is at least a constant fraction of the length of the corresponding edge of the bounding box of all the input elements. This is now the most general case for which a PTAS is known, and it requires a new and involved partitioning scheme, which should be of independent interest.

Bandits and Experts in Metric Spaces

In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of trials so as to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is quite well understood, bandit problems with large strategy sets are still a topic of very active investigation, motivated by practical applications such as online auctions and web advertisement. The goal of such research is to identify broad and natural classes of strategy sets and payoff functions which enable the design of efficient solutions. In this work we study a very general setting for the multi-armed bandit problem in which the strategies form a metric space, and the payoff function satisfies a Lipschitz condition with respect to the metric. We refer to this problem as the "Lipschitz MAB problem". We present a solution for the multi-armed bandit problem in this setting. That is, for every metric space we define an isometry invariant which bounds from below the performance of Lipschitz MAB algorithms for this metric space, and we present an algorithm which comes arbitrarily close to meeting this bound. Furthermore, our technique gives even better results for benign payoff functions. We also address the full-feedback (``best expert'') version of the problem, where after every round the payoffs from all arms are revealed.
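A minimal sketch of a zooming-style strategy of the kind the article analyzes, on the metric space [0, 1]: active arms carry confidence radii that shrink with their pull counts, play is optimistic, and a new arm is activated wherever the space is uncovered. The constants and the random activation probe are illustrative simplifications, not the paper's exact pseudocode:

```python
import math, random

def zooming_bandit(payoff, horizon, seed=0):
    """Zooming-style strategy for a Lipschitz bandit on [0, 1].

    Each active arm keeps an empirical mean and a confidence radius that
    shrinks with its pull count; each round plays the active arm with the
    highest optimistic index, and a fresh arm is activated whenever some
    point of the space is not covered by an active arm's confidence ball."""
    rng = random.Random(seed)
    arms = []                                  # entries: [location, pulls, mean]
    for t in range(horizon):
        rad = lambda pulls: math.sqrt(8 * math.log(horizon) / (pulls + 1))
        probe = rng.random()                   # activation: cover uncovered points
        if all(abs(probe - x) > rad(n) for x, n, _ in arms):
            arms.append([probe, 0, 0.0])
        # Optimistic play: maximize empirical mean + 2 * confidence radius.
        i = max(range(len(arms)), key=lambda j: arms[j][2] + 2 * rad(arms[j][1]))
        r = payoff(arms[i][0], rng)
        arms[i][1] += 1
        arms[i][2] += (r - arms[i][2]) / arms[i][1]
    return max(arms, key=lambda a: a[2])[0]

# 1-Lipschitz expected payoff peaked at 0.7, plus bounded noise.
f = lambda x, rng: 1 - abs(x - 0.7) + rng.uniform(-0.5, 0.5)
print(zooming_bandit(f, horizon=20_000))       # typically close to 0.7
```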

From Real-Time Logic to Timed Automata

We show how to transform formulae written in the real-time temporal logic MITL into timed automata recognizing their satisfying models, taken to be continuous-time Boolean signals. This compositional construction is much simpler than previously known; it supports both past and future operators and can easily be extended.
