
Journal of the ACM (JACM)

About JACM

The Journal of the ACM (JACM) provides coverage of the most significant work on principles of computer science, broadly construed. The scope of research we cover encompasses contributions of lasting value to any area of computer science. To be accepted, a paper must be judged to be truly outstanding in its field. JACM is interested in work in core computer science and at the boundaries, both the boundaries of subdisciplines of computer science and the boundaries between computer science and other fields.

Editorial Process

The Journal of the ACM begins the refereeing process with a "quick review" to check whether the manuscript has a plausible chance of meeting JACM's high standards, even if all the claimed results are correct. JACM tries to cover a broad spectrum of areas and can accept only 4-5 papers in any given area each year. Thus, we try to focus on the most significant papers in each area, those that would interest the broad community, and reject many papers that would be accepted by other journals.

Important Note on P/NP

Some submissions purport to solve a long-standing open problem in complexity theory, such as the P/NP problem. Many of these turn out to be mistaken, and such submissions tax JACM volunteer editors and reviewers.

The Weisfeiler-Leman Dimension of Planar Graphs is at most 3

We prove that the Weisfeiler-Leman (WL) dimension of the class of all finite planar graphs is at most 3. In particular, every finite planar graph is definable in first-order logic with counting using at most 4 variables. The previously best known upper bounds for the dimension and number of variables were 14 and 15, respectively. First we show that, for dimension 3 and higher, the WL-algorithm correctly tests isomorphism of graphs in a minor-closed class whenever it determines the orbits of the automorphism group of any arc-colored 3-connected graph belonging to this class. Then we prove that, apart from several exceptional graphs (which have WL-dimension at most 2), the individualization of two correctly chosen vertices of a colored 3-connected planar graph followed by the 1-dimensional WL-algorithm produces the discrete vertex partition. This implies that the 3-dimensional WL-algorithm determines the orbits of a colored 3-connected planar graph. As a byproduct of the proof, we get a classification of the 3-connected planar graphs with fixing number 3.
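
For readers unfamiliar with the refinement step invoked above, here is a minimal Python sketch of the 1-dimensional WL-algorithm (color refinement). It is our own illustration, not the authors' code; the adjacency-dict representation and the `color_refinement` helper are assumptions made for the example.

```python
# Minimal sketch of 1-dimensional Weisfeiler-Leman (color refinement).
# Illustration only: the graph is an adjacency dict, and init_color encodes
# any vertex colors plus individualized vertices as distinct colors.

def color_refinement(adj, init_color):
    """Refine vertex colors until the color partition stabilizes."""
    color = dict(init_color)
    while True:
        # New color of v = its old color plus the multiset of its
        # neighbors' old colors.
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v])))
               for v in adj}
        # Canonically rename the signatures to small integers.
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_color = {v: palette[sig[v]] for v in adj}
        if new_color == color:        # stable partition reached
            return color
        color = new_color

# Individualizing two adjacent vertices of a 4-cycle and refining yields
# the discrete partition (every vertex in its own color class):
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(color_refinement(cycle, {0: 1, 1: 2, 2: 0, 3: 0}))
```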

On the Parameterized Complexity of Approximating Dominating Set

We study the parameterized complexity of approximating the k-Dominating Set (DomSet) problem, where an integer k and a graph G on n vertices are given as input, and the goal is to find a dominating set of size at most F(k)·k whenever the graph G has a dominating set of size k. When such an algorithm runs in time T(k)·poly(n) (i.e., FPT-time) for some computable function T, it is said to be an F(k)-FPT-approximation algorithm for k-DomSet. Whether such an algorithm exists is listed in the seminal book of Downey and Fellows (2013) as one of the "most infamous" open problems in Parameterized Complexity. This work gives an almost complete answer to this question by showing the non-existence of such an algorithm under W[1] ≠ FPT and further providing tighter running time lower bounds under stronger hypotheses. Specifically, we prove the following for all computable functions T, F and every constant ε > 0:

-- Assuming W[1] ≠ FPT, there is no F(k)-FPT-approximation algorithm for k-DomSet.

-- Assuming the Exponential Time Hypothesis (ETH), there is no F(k)-approximation algorithm for k-DomSet that runs in T(k)·n^{o(k)} time.

-- Assuming the Strong Exponential Time Hypothesis (SETH), for every integer k ≥ 2, there is no F(k)-approximation algorithm for k-DomSet that runs in T(k)·n^{k-ε} time.

-- Assuming the k-SUM Hypothesis, for every integer k ≥ 3, there is no F(k)-approximation algorithm for k-DomSet that runs in T(k)·n^{(k+1)/2-ε} time.

Previously, only constant-ratio FPT-approximation algorithms were ruled out under W[1] ≠ FPT, and (log^{1/4-ε} k)-FPT-approximation algorithms were ruled out under ETH [Chen and Lin, FOCS 2016]. Recently, the non-existence of an F(k)-FPT-approximation algorithm for any function F was shown under Gap-ETH [Chalermsook et al., FOCS 2017]. Note that, to the best of our knowledge, no running time lower bound of the form n^{δk}, for any absolute constant δ > 0, was known before, even for any constant inapproximability ratio. Our results are obtained by establishing a connection between communication complexity and hardness of approximation, generalizing ideas from a recent breakthrough work of Abboud et al. [FOCS 2017]. Specifically, we show that to prove hardness of approximation of a certain parameterized variant of the Label Cover problem, it suffices to devise a specific protocol for a communication problem that depends on which hypothesis we rely on. Each of these communication problems turns out to be either a well-studied problem or a variant of one; this allows us to easily apply known techniques to solve them.
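
To put the running-time bounds in context, the trivial exact algorithm enumerates all size-k subsets in n^k·poly(n) time; the SETH-based result above says that even approximating the answer cannot beat this n^{k-ε} barrier. Below is a minimal Python sketch of that brute-force baseline (our illustration, not from the paper):

```python
# Brute-force baseline for k-Dominating Set: try every size-k subset and
# check whether it dominates the graph, in n^k * poly(n) time overall.
from itertools import combinations

def has_dominating_set(adj, k):
    """Exact check: is there a dominating set of size at most k?"""
    vertices = list(adj)
    for subset in combinations(vertices, k):
        dominated = set(subset)
        for v in subset:
            dominated.update(adj[v])       # v dominates its neighbors
        if len(dominated) == len(vertices):
            return True
    return False

# Example: the star K_{1,4} is dominated by its center alone.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(has_dominating_set(star, 1))   # True
```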

White-Box vs. Black-Box Complexity of Search Problems: Ramsey and Graph Property Testing

Ramsey theory assures us that in any graph there is a clique or independent set of a certain size, roughly logarithmic in the graph size. But how difficult is it to find the clique or independent set? If the graph is given explicitly, then it is possible to do so while examining a linear number of edges. If the graph is given by a black-box, where the box must be queried to learn whether a given edge exists, then a large number of queries must be issued. But what if one is given a program or circuit for computing the existence of an edge? This problem was raised by Buss and by Goldberg and Papadimitriou in the context of TFNP, the class of search problems with a guaranteed solution. We examine the relationship between black-box complexity and white-box complexity for search problems with guaranteed solution, such as the above Ramsey problem. We show that, under the assumption that collision-resistant hash functions exist (which follows from the hardness of problems such as factoring, discrete log, and learning with errors), the white-box Ramsey problem is hard, and this holds even if one is looking for a much smaller clique or independent set than the theorem guarantees. The same is true for the colorful Ramsey problem, where one is looking, say, for a monochromatic triangle. In general, one cannot hope to translate all black-box hardness for TFNP into white-box hardness: we show this by adapting results concerning the random oracle methodology and the impossibility of instantiating it. Another model we consider is that of succinct black-box, where there is a limitation on the size of the black-box (but no limitation on the computation time). In this case we show that for all TFNP problems there is an upper bound proportional to the description size of the box times the solution size. For promise problems, on the other hand, this is not the case. Finally, we consider the complexity of graph property testing in the white-box model. We exhibit a property that is hard to test even when one is given the program for computing the graph (under appropriate assumptions, such as the hardness of Decisional Diffie-Hellman). The hard property is whether the graph is a two-source extractor.
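
The claim that an explicitly given graph admits a roughly logarithmic clique or independent set, found while examining only a linear number of edges, follows the classical greedy argument, sketched below in Python (our illustration; `has_edge` stands in for the box or program discussed above):

```python
# Classical greedy witness for the Ramsey bound: pick a pivot, keep the
# larger of its neighborhood / non-neighborhood, and repeat. The surviving
# set loses at most half (plus the pivot) each round, so ~log2(n) pivots
# are collected using O(n) edge probes in total; a majority of the pivots
# share a label, and those form a clique or an independent set.
def clique_or_independent_set(n, has_edge):
    alive = list(range(n))
    pivots = []                               # (vertex, label) pairs
    while alive:
        v = alive.pop()
        nbrs = [u for u in alive if has_edge(v, u)]
        non = [u for u in alive if not has_edge(v, u)]
        if len(nbrs) >= len(non):
            pivots.append((v, 'clique'))      # v adjacent to all survivors
            alive = nbrs
        else:
            pivots.append((v, 'independent')) # v non-adjacent to survivors
            alive = non
    clique = [v for v, lab in pivots if lab == 'clique']
    indep = [v for v, lab in pivots if lab == 'independent']
    return (('clique', clique) if len(clique) >= len(indep)
            else ('independent', indep))

# On the complete graph, the procedure returns a clique on all pivots.
print(clique_or_independent_set(8, lambda u, v: True))
```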

The Moser-Tardos Framework with Partial Resampling

Self-stabilising Byzantine Clock Synchronisation is Almost as Easy as Consensus

We give fault-tolerant algorithms for establishing synchrony in distributed systems in which each of the $n$ nodes has its own clock. Our algorithms operate in a very strong fault model: we require self-stabilisation, i.e., the initial state of the system may be arbitrary, and there can be up to $f < n/3$ ongoing Byzantine faults, i.e., nodes that deviate from the protocol in an arbitrary manner. Furthermore, we assume that the local clocks of the nodes may progress at different speeds (clock drift) and communication has bounded delay. In this model, we study the pulse synchronisation problem, where the task is to guarantee that eventually all correct nodes generate well-separated local pulse events (i.e., unlabelled logical clock ticks) in a synchronised manner. Compared to prior work, we achieve exponential improvements in stabilisation time and the number of communicated bits, and give the first sublinear-time algorithm for the problem:

-- In the deterministic setting, the state-of-the-art solutions stabilise in time $\Theta(f)$ and have each node broadcast $\Theta(f \log f)$ bits per time unit. We exponentially reduce the number of bits broadcasted per time unit to $\Theta(\log f)$ while retaining the same stabilisation time.

-- In the randomised setting, the state-of-the-art solutions stabilise in time $\Theta(f)$ and have each node broadcast $O(1)$ bits per time unit. We exponentially reduce the stabilisation time to $\mathrm{polylog}(f)$ while each node broadcasts $\mathrm{polylog}(f)$ bits per time unit.

These results are obtained by means of a recursive approach reducing the above task of self-stabilising pulse synchronisation in the bounded-delay model to non-self-stabilising binary consensus in the synchronous model. In general, our approach introduces at most logarithmic overheads in terms of stabilisation time and broadcasted bits over the underlying consensus routine.
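
The reduction's target primitive, synchronous binary consensus, can be illustrated with a toy sketch. The Python code below is our own illustration of the classical FloodSet routine, which tolerates only crash faults, a far weaker model than the Byzantine-tolerant consensus the paper's recursion actually composes with; `floodset_consensus` and its `crashed_at` simulation knob are hypothetical names.

```python
# Toy illustration of synchronous binary consensus (crash faults only).
# Every node repeatedly broadcasts the set of input values it has seen;
# after f + 1 rounds there is a crash-free round, so all surviving nodes
# hold the same set and can decide consistently (e.g., on its minimum).
def floodset_consensus(inputs, f, crashed_at=None):
    """inputs: node -> 0/1; f: max crashes;
    crashed_at: node -> round before which it crashes (simulation knob)."""
    crashed_at = crashed_at or {}
    known = {v: {b} for v, b in inputs.items()}
    for rnd in range(f + 1):
        alive = [v for v in inputs if crashed_at.get(v, f + 2) > rnd]
        msgs = {v: known[v].copy() for v in alive}   # synchronous broadcast
        for v in alive:
            for w in alive:
                known[v] |= msgs[w]
    return {v: min(known[v]) for v in inputs
            if crashed_at.get(v, f + 2) > f}

print(floodset_consensus({1: 0, 2: 1, 3: 1}, f=1))   # all nodes decide 0
```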

Online bipartite matching with amortized O(log² n) replacements

In the online bipartite matching problem with replacements, all the vertices on one side of the bipartition are given, and the vertices on the other side arrive one by one with all their incident edges. The goal is to maintain a maximum matching while minimizing the number of changes (replacements) to the matching. We show that the greedy algorithm that always takes the shortest augmenting path from the newly inserted vertex (denoted the SAP protocol) uses at most amortized O(log² n) replacements per insertion, where n is the total number of vertices inserted. This is the first analysis to achieve a polylogarithmic number of replacements for any replacement strategy, almost matching the Ω(log n) lower bound. The previously best known strategy achieved amortized O(√n) replacements [Bosek, Leniowski, Sankowski, Zych, FOCS 2014]. For the SAP protocol in particular, nothing better than the trivial O(n) bound was known except in special cases. Our analysis immediately implies the same upper bound of O(log² n) reassignments for the capacitated assignment problem, where each vertex on the static side of the bipartition is initialized with the capacity to serve a number of vertices. We also analyze the problem of minimizing the maximum server load. We show that if the final graph has maximum server load L, then the SAP protocol makes amortized O(min{L·log² n, √n·log n}) reassignments. We also show that this is close to tight, because Ω(min{L, √n}) reassignments can be necessary.
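
The SAP protocol itself is easy to state: on each arrival, search for a shortest augmenting path by BFS and flip the matching along it. The following Python sketch is our own illustration (names such as `sap_insert` are hypothetical), with clients arriving on one side and servers static on the other:

```python
# Sketch of the SAP protocol: when a client arrives, BFS for a shortest
# augmenting path to a free server and flip the matching along it. Each
# already-matched client moved to a new server counts as a replacement.
from collections import deque

def sap_insert(new_client, edges, match):
    """edges maps each client to its incident servers; match maps matched
    vertices to their partners (stored in both directions)."""
    parent = {new_client: None}
    queue = deque([new_client])
    while queue:
        c = queue.popleft()                  # c is on the client side
        for s in edges[c]:
            if s in parent:
                continue
            parent[s] = c
            if s not in match:               # free server: augment here
                replacements = 0
                while s is not None:
                    c = parent[s]
                    if c in match:           # c is being reassigned
                        replacements += 1
                    match[s], match[c] = c, s
                    s = parent[c]
                return replacements
            nxt = match[s]                   # continue past matched server
            parent[nxt] = s
            queue.append(nxt)
    return 0                                 # no augmenting path exists

edges = {'a': ['s1', 's2'], 'b': ['s1']}
match = {}
sap_insert('a', edges, match)        # 'a' is matched greedily to s1
print(sap_insert('b', edges, match)) # prints 1: 'a' moves from s1 to s2
```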

On the complexity of hazard-free circuits

The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is an important problem in circuit design. Our main lower-bound result unconditionally shows the existence of functions whose circuit complexity is polynomially bounded while every hazard-free implementation is provably of exponential size. Previous lower bounds on the hazard-free complexity were only valid for depth 2 circuits. The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards. These results follow from a crucial structural insight: Hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions. Thus, we can apply known monotone complexity lower bounds to find lower bounds on the hazard-free complexity. We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions. As our main upper-bound result we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blow-up in the number of gates. Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement. As a side result we establish the NP-completeness of several hazard detection problems.
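
As a refresher on what a hazard is, the sketch below (a standard multiplexer example, not taken from the paper) evaluates gates over Kleene's three-valued logic; an output of 'u' on an unstable input whose every stable completion agrees on the Boolean output signals a hazard:

```python
# Hazards via ternary (Kleene) logic over {0, 1, 'u'} ('u' = unstable).
def t_and(x, y):
    if x == 0 or y == 0: return 0
    return 1 if (x, y) == (1, 1) else 'u'

def t_or(x, y):
    if x == 1 or y == 1: return 1
    return 0 if (x, y) == (0, 0) else 'u'

def t_not(x):
    return {0: 1, 1: 0, 'u': 'u'}[x]

def mux(s, a, b):
    # Standard 2-way multiplexer: (s AND a) OR (NOT s AND b).
    return t_or(t_and(s, a), t_and(t_not(s), b))

# With a = b = 1 the Boolean output is 1 for both s = 0 and s = 1, yet the
# ternary evaluation on s = 'u' yields 'u': this circuit has a hazard at s.
print(mux(0, 1, 1), mux(1, 1, 1), mux('u', 1, 1))   # 1 1 u
```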
