Journal of the ACM (JACM)

About JACM

The Journal of the ACM (JACM) provides coverage of the most significant work on principles of computer science, broadly construed. The scope of research we cover encompasses contributions of lasting value to any area of computer science. To be accepted, a paper must be judged to be truly outstanding in its field. JACM is interested in work in core computer science and at the boundaries, both the boundaries of subdisciplines of computer science and the boundaries between computer science and other fields.

Editorial Process

The Journal of the ACM begins the refereeing process with a "quick review" to check whether the manuscript has a plausible chance of meeting JACM's high standards, even if all the claimed results are correct. JACM tries to cover a broad spectrum of areas and can accept only 4-5 papers in any given area each year. We therefore focus on the most significant papers in each area, those that will be of interest to the broad community, and reject many papers that would be accepted by other journals.

Important Note on P/NP

Some submissions purport to solve a long-standing open problem in complexity theory, such as the P/NP problem. Many of these turn out to be mistaken, and such submissions tax JACM volunteer editors and reviewers.

Tight bounds for undirected graph exploration with pebbles and multiple agents

We study the problem of deterministically exploring an undirected and initially unknown graph with n vertices, either by a single agent equipped with a set of pebbles or by a set of collaborating agents. The vertices of the graph are unlabeled and cannot be distinguished by the agents, but the edges incident to a vertex have locally distinct labels. The graph is explored when all vertices are visited by at least one agent. In this setting, it is known that for a single agent without pebbles Θ(log n) bits of memory are necessary and sufficient to explore any graph with at most n vertices. We are interested in how the memory requirement decreases as the agent may mark vertices by dropping and retrieving distinguishable pebbles, or when multiple agents jointly explore the graph. We give tight results for both questions, showing that for a single agent with constant memory Θ(log log n) pebbles are necessary and sufficient for exploration. We further prove that using collaborating agents instead of pebbles does not help, as Θ(log log n) agents with constant bits of memory each are necessary and sufficient for exploration. For the upper bounds, we devise an algorithm for a single agent with constant memory that explores any n-vertex graph using O(log log n) pebbles, even when n is not known a priori. The algorithm terminates after polynomial time and returns to the starting vertex. Since an additional agent is at least as powerful as a pebble, this implies that O(log log n) agents with constant memory can explore any n-vertex graph. For the lower bound, we show that the number of agents needed for exploring any graph with at most n vertices is already Ω(log log n) when we allow each agent to have at most O((log n)^(1-ε)) bits of memory for some ε > 0. This also implies that a single agent with sublogarithmic memory needs Θ(log log n) pebbles to explore any n-vertex graph.
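
To make the model concrete, here is a minimal Python sketch of the port-numbered exploration setting described above (the encoding and all names are ours, and pebbles are omitted): the agent never sees vertex identities, only the locally distinct edge labels, or "ports", at its current vertex.

    # A toy simulator for the exploration model in the abstract (not the
    # authors' algorithm): vertices are anonymous, but the edges at each
    # vertex carry locally distinct port numbers 0..deg-1.
    class PortGraph:
        def __init__(self, edges, n):
            # adjacency[v][port] = (neighbor, port at the neighbor's side)
            self.adjacency = {v: {} for v in range(n)}
            for (u, pu, v, pv) in edges:
                self.adjacency[u][pu] = (v, pv)
                self.adjacency[v][pv] = (u, pu)

    def walk(graph, start, ports):
        """Follow a fixed sequence of port choices from `start`.

        The agent never learns vertex identities; we track them here only
        to report coverage ('explored' = every vertex visited)."""
        v, visited = start, {start}
        for p in ports:
            v, _arrival_port = graph.adjacency[v][p]
            visited.add(v)
        return visited

    # A 4-cycle, each edge given as (u, port_at_u, v, port_at_v).
    g = PortGraph([(0, 0, 1, 1), (1, 0, 2, 1), (2, 0, 3, 1), (3, 0, 0, 1)], 4)
    print(walk(g, 0, [0, 0, 0]))   # {0, 1, 2, 3}: the cycle is explored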

Certain Answers Meet Zero-One Laws

Query answering over incomplete data invariably relies on the standard notion of certain answers, which gives a very coarse classification of query answers into those that are certain and those that are not. Our goal is to refine it by measuring how close an answer is to certainty. This measure is defined as the probability that the query is true under a random interpretation of missing information in a database. Since there are infinitely many such interpretations, to pick one at random we adopt the approach used in the study of asymptotic properties and 0-1 laws for logical sentences, and define the measure as the limit of a sequence. We prove that without any restrictions imposed, the standard model of missing data admits the 0-1 law. That is, the limit always exists and can be only 0 or 1 for a very large class of queries. In other words, query answers are either almost certainly true or almost certainly false. We show that almost certainly true answers are precisely those returned by the naive evaluation of the query. When restrictions are imposed and databases are required to satisfy constraints, the measure is the conditional probability of the query being true if the constraints are true. This too is defined as a limit; we prove that it always exists, can be an arbitrary rational number, and is computable. For some constraints, such as functional dependencies, the 0-1 law continues to hold. We also look at evaluation procedures based on many-valued logics, as used in relational database systems that handle incomplete information. We identify conditions under which such evaluation procedures return almost certainly true answers, and explain the reasons why real-life DBMSs break these conditions and can thus return arbitrarily bad answers. As another refinement of the notion of certainty, we introduce a comparison of query answers: an answer with a larger set of interpretations that make it true is better. We identify the precise complexity of such comparisons, and of finding sets of best answers, for first-order queries.
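
To illustrate the naive evaluation mentioned above (our toy encoding, not the paper's formalism), marked nulls can be modeled as objects equal only to themselves; for a positive join query, naive evaluation then returns exactly the tuples that hold under every interpretation of the nulls.

    # A marked null: a placeholder that is equal only to itself.
    class Null:
        def __repr__(self):
            return "null@" + str(id(self) % 1000)

    n1 = Null()
    R = [(1, 2), (1, n1)]        # relation R(a, b), one value missing
    S = [(2, "x"), (3, "y")]     # complete relation S(b, c)

    # Naive evaluation of the join query  q(a, c) :- R(a, b), S(b, c)
    answers = [(a, c) for (a, b) in R for (b2, c) in S if b == b2]
    print(answers)  # [(1, 'x')]: certain; (1, n1) joins with nothing,
                    # although n1 = 2 or n1 = 3 would yield more answers.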

The Weisfeiler-Leman Dimension of Planar Graphs is at most 3

We prove that the Weisfeiler-Leman (WL) dimension of the class of all finite planar graphs is at most 3. In particular, every finite planar graph is definable in first-order logic with counting using at most 4 variables. The previously best known upper bounds for the dimension and number of variables were 14 and 15, respectively. First we show that, for dimension 3 and higher, the WL-algorithm correctly tests isomorphism of graphs in a minor-closed class whenever it determines the orbits of the automorphism group of any arc-colored 3-connected graph belonging to this class. Then we prove that, apart from several exceptional graphs (which have WL-dimension at most 2), the individualization of two correctly chosen vertices of a colored 3-connected planar graph followed by the 1-dimensional WL-algorithm produces the discrete vertex partition. This implies that the 3-dimensional WL-algorithm determines the orbits of a colored 3-connected planar graph. As a byproduct of the proof, we get a classification of the 3-connected planar graphs with fixing number 3.
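
For background, this Python sketch implements the 1-dimensional WL-algorithm (color refinement) used in the proof; the 3-dimensional variant refines colors of vertex triples instead, and individualizing a vertex simply means starting the refinement with that vertex given a unique color. The graph encoding is illustrative.

    def color_refinement(adj, colors=None):
        """Refine vertex colors by the multiset of neighbor colors."""
        colors = colors or {v: 0 for v in adj}
        while True:
            sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                   for v in adj}
            # Canonically rename the signatures to small integers.
            palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
            new = {v: palette[sig[v]] for v in adj}
            if new == colors:        # stable partition reached
                return colors
            colors = new

    # A path on 4 vertices: endpoints and inner vertices get distinct colors.
    path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(color_refinement(path))    # {0: 0, 1: 1, 2: 1, 3: 0}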

Online bipartite matching with amortized O(log² n) replacements

In the online bipartite matching problem with replacements, all the vertices on one side of the bipartition are given, and the vertices on the other side arrive one by one with all their incident edges. The goal is to maintain a maximum matching while minimizing the number of changes (replacements) to the matching. We show that the greedy algorithm that always takes the shortest augmenting path from the newly inserted vertex (denoted the SAP protocol) uses at most amortized O(log² n) replacements per insertion, where n is the total number of vertices inserted. This is the first analysis to achieve a polylogarithmic number of replacements for any replacement strategy, almost matching the Ω(log n) lower bound. The previous best known strategy achieved amortized O(√n) replacements [Bosek, Leniowski, Sankowski, Zych, FOCS 2014]. For the SAP protocol in particular, nothing better than the trivial O(n) bound was known except in special cases. Our analysis immediately implies the same upper bound of O(log² n) reassignments for the capacitated assignment problem, where each vertex on the static side of the bipartition is initialized with the capacity to serve a number of vertices. We also analyze the problem of minimizing the maximum server load. We show that if the final graph has maximum server load L, then the SAP protocol makes amortized O(min{L·log² n, √n·log n}) reassignments. We also show that this is close to tight, because Ω(min{L, √n}) reassignments can be necessary.
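
A minimal Python sketch of the SAP protocol as described above (data layout and names are ours): each arriving vertex triggers a breadth-first search for a shortest augmenting path, and we count how many previously matched edges are replaced along the way.

    from collections import deque

    def sap_insert(edges, match, new_vertex):
        """Insert new_vertex, augment along a shortest augmenting path
        found by BFS, and return the number of replaced matched edges."""
        parent = {new_vertex: None}
        queue = deque([new_vertex])
        while queue:
            v = queue.popleft()              # v is on the online side
            for server in edges[v]:
                if server in parent:
                    continue
                parent[server] = v
                if server not in match:      # free server: augment back
                    replaced = 0
                    while server is not None:
                        v = parent[server]
                        if v in match:       # edge (v, match[v]) replaced
                            replaced += 1
                        match[v], match[server] = server, v
                        server = parent[v]
                    return replaced
                partner = match[server]      # continue from the partner
                parent[partner] = server
                queue.append(partner)
        return 0                             # no augmenting path exists

    edges = {"a": ["s1", "s2"], "b": ["s1"]}
    match = {}
    print(sap_insert(edges, match, "a"))  # 0: "a" takes the free server s1
    print(sap_insert(edges, match, "b"))  # 1: "a" is rerouted to s2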

On the complexity of hazard-free circuits

The problem of constructing hazard-free Boolean circuits dates back to the 1940s and is an important problem in circuit design. Our main lower-bound result unconditionally shows the existence of functions whose circuit complexity is polynomially bounded while every hazard-free implementation is provably of exponential size. Previous lower bounds on the hazard-free complexity were only valid for depth 2 circuits. The same proof method yields that every subcubic implementation of Boolean matrix multiplication must have hazards. These results follow from a crucial structural insight: Hazard-free complexity is a natural generalization of monotone complexity to all (not necessarily monotone) Boolean functions. Thus, we can apply known monotone complexity lower bounds to find lower bounds on the hazard-free complexity. We also lift these methods from the monotone setting to prove exponential hazard-free complexity lower bounds for non-monotone functions. As our main upper-bound result we show how to efficiently convert a Boolean circuit into a bounded-bit hazard-free circuit with only a polynomially large blow-up in the number of gates. Previously, the best known method yielded exponentially large circuits in the worst case, so our algorithm gives an exponential improvement. As a side result we establish the NP-completeness of several hazard detection problems.
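
For background on the notion of a hazard (this is the standard ternary-simulation view, not the paper's constructions): a circuit has a hazard on a partial input if the Boolean function is already determined but the gate-by-gate three-valued simulation, with 'X' standing for an unstable bit, still outputs 'X'.

    # Kleene's three-valued gates, with 'X' denoting an unstable input.
    def t_and(a, b):
        return 0 if 0 in (a, b) else ('X' if 'X' in (a, b) else 1)

    def t_or(a, b):
        return 1 if 1 in (a, b) else ('X' if 'X' in (a, b) else 0)

    def t_not(a):
        return 'X' if a == 'X' else 1 - a

    # The textbook multiplexer OR(AND(s, y), AND(NOT s, x)): for s = 'X'
    # and x = y = 1 the output is determined (both branches give 1), yet
    # the simulation yields 'X', so this implementation has a hazard.
    def mux(s, x, y):
        return t_or(t_and(s, y), t_and(t_not(s), x))

    print(mux('X', 1, 1))   # 'X'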

Deciding Context Unification

Contexts are terms with one 'hole', i.e., a place in which we can substitute an argument. In context unification we are given an equation over terms with variables representing contexts and ask about the satisfiability of this equation. Context unification is at the same time a natural subvariant of second-order unification, which is undecidable, and a generalization of word equations, which are decidable. It was the unique problem between those two whose decidability had remained unknown for almost two decades. In this paper we show that context unification is in PSPACE, and in EXPTIME when tree regular constraints are also allowed. These results are obtained by extending to context unification the recompression technique, recently developed by the author and used in particular to obtain a new PSPACE algorithm for the satisfiability of word equations. Recompression is based on performing simple compression rules (replacing pairs of neighbouring function symbols), which are (conceptually) applied to the solution of the context equation, and on modifying the equation so that such compression steps can be performed directly on the equation, without knowledge of the actual solution. The crucial property is that, when the compression operations are chosen appropriately, the instance stays of polynomial size.
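
The following Python sketch shows the basic pair-compression step that recompression repeatedly performs, demonstrated on a plain word rather than a context equation (the heart of the method, modifying the equation so that compression can be simulated without knowing the solution, is not shown; names are ours).

    def compress_pair(word, pair, fresh):
        """Replace non-overlapping occurrences of `pair` by a fresh symbol."""
        out, i = [], 0
        while i < len(word):
            if word[i:i + 2] == list(pair):
                out.append(fresh)
                i += 2
            else:
                out.append(word[i])
                i += 1
        return out

    w = list("abababba")
    w = compress_pair(w, "ab", "c")
    print("".join(w))   # "cccba": each neighbouring pair "ab" became "c"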

An operational characterization of mutual information in algorithmic information theory

We show that the mutual information, in the sense of Kolmogorov complexity, of any pair of strings x and y is equal, up to logarithmic precision, to the length of the longest shared secret key that two parties, one having x and the complexity profile of the pair and the other one having y and the complexity profile of the pair, can establish via a probabilistic protocol with interaction on a public channel. For L > 2, the longest shared secret that can be established from a tuple of strings (x_1, ..., x_L ) by L parties, each one having one component of the tuple and the complexity profile of the tuple, is equal, up to logarithmic precision, to the complexity of the tuple minus the minimum communication necessary for distributing the tuple to all parties. We establish the communication complexity of secret key agreement protocols that produce a secret key of maximal length, for protocols with public randomness. We also show that if the communication complexity drops below the established threshold then only very short secret keys can be obtained.
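
In symbols, with K denoting Kolmogorov complexity and using the standard definition of algorithmic mutual information (the notation, and the exact form of the logarithmic precision term, are our paraphrase, not the paper's statement):

    I(x : y) = K(x) + K(y) - K(x, y),
    \qquad \mathrm{maxkey}(x, y) = I(x : y) \pm O(\log K(x, y))

where maxkey(x, y) is our ad-hoc name for the length of the longest shared secret key that the two parties can establish via a probabilistic protocol over a public channel.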
