Journal of the ACM (JACM)

Latest Articles

Rumor Spreading and Conductance

In this article, we study the completion time of the PUSH-PULL variant of rumor spreading, also known as randomized broadcast. We show that if a network has n nodes and conductance φ, then, with high probability, PUSH-PULL will deliver the message to all nodes in the graph within O(log n/φ) communication rounds. This bound is best... (more)
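
As a rough illustration of the protocol being analyzed, here is a minimal simulation sketch of one synchronous PUSH-PULL round on an undirected graph, assuming an adjacency-list representation; the graph, node labels, and helper names below are hypothetical and not taken from the article.

```python
import random

def push_pull_round(adj, informed):
    """One synchronous PUSH-PULL round: every node contacts a uniformly
    random neighbor; informed nodes push the rumor, uninformed nodes pull it."""
    newly_informed = set()
    for v, neighbors in adj.items():
        if not neighbors:
            continue
        u = random.choice(neighbors)      # random communication partner of v
        if v in informed and u not in informed:
            newly_informed.add(u)         # PUSH: v sends the rumor to u
        if v not in informed and u in informed:
            newly_informed.add(v)         # PULL: v requests the rumor from u
    return informed | newly_informed

# Example: spread a rumor from node 0 on a small 5-node cycle.
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
informed, rounds = {0}, 0
while len(informed) < len(adj):
    informed = push_pull_round(adj, informed)
    rounds += 1
print(f"all nodes informed after {rounds} rounds")
```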

Path ORAM: An Extremely Simple Oblivious RAM Protocol

We present Path ORAM, an extremely simple Oblivious RAM protocol with a small amount of client storage. Partly due to its simplicity, Path ORAM is the most practical ORAM scheme known to date with small client storage. We formally prove that Path ORAM has a O(log N) bandwidth cost for blocks of size... (more)

Distributed (Δ+1)-Coloring in Sublogarithmic Rounds

We give a new randomized distributed algorithm for (Δ+1)-coloring in the LOCAL model, running in O(√log Δ) + 2^{O(√log log n)}... (more)

Non-Malleable Codes

We introduce the notion of “non-malleable codes” which relaxes the notion of error correction and error detection. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely unrelated value. In contrast to error correction and error detection, non-malleability... (more)

Equivalence of Deterministic Top-Down Tree-to-String Transducers Is Decidable

We prove that equivalence of deterministic top-down tree-to-string transducers is decidable, thus solving a long-standing open problem in formal... (more)

Threesomes, Degenerates, and Love Triangles

The 3SUM problem is to decide, given a set of n real numbers, whether any three sum to zero. It is widely conjectured that a trivial O(n^2)-time algorithm is optimal on the Real RAM, and optimal even in the nonuniform linear decision tree model. Over the years the consequences of this conjecture have been revealed. This 3SUM conjecture implies... (more)
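
For reference, the "trivial O(n^2)-time algorithm" the conjecture refers to is typically the sort-and-scan procedure sketched below; this is a generic textbook version, not code from the article.

```python
def three_sum_exists(nums):
    """Decide 3SUM in O(n^2) time: sort, then for each fixed element
    scan the remaining array with two pointers."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1
            else:
                hi -= 1
    return False

print(three_sum_exists([-5, 1, 4, 2, -3]))   # True: -5 + 1 + 4 = 0
```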


About JACM

The Journal of the ACM (JACM) provides coverage of the most significant work on principles of computer science, broadly construed. The scope of research we cover encompasses contributions of lasting value to any area of computer science. To be accepted, a paper must be judged to be truly outstanding in its field. JACM is interested in work in core computer science and at the boundaries, both the boundaries of subdisciplines of computer science and the boundaries between computer science and other fields. READ MORE

Editorial Process

The Journal of the ACM begins the refereeing process with a "quick review", to check whether the manuscript has a plausible chance of meeting JACM's high standards, even if all the claimed results are correct. JACM tries to cover a broad spectrum of areas, and can only accept 4-5 papers in any given area every year. Thus, we try to focus on the most significant papers in each area, those of interest to the broad community, and reject many papers that would be accepted by other journals. READ MORE

Important Note on P/NP

Some submissions purport to solve a long-standing open problem in complexity theory, such as the P/NP problem. Many of these turn out to be mistaken, and such submissions tax JACM volunteer editors and reviewers.  READ MORE

Forthcoming Articles
Decremental Single-Source Shortest Paths on Undirected Graphs in Near-Linear Total Update Time

In the decremental single-source shortest paths (SSSP) problem we want to maintain the distances between a given source node s and every other node in an n-node m-edge graph G undergoing edge deletions. While its static counterpart can be solved in near-linear time, this decremental problem is much more challenging even in the undirected unweighted case. In this case, the classic O(mn) total update time of Even and Shiloach [1981] has been the fastest known algorithm for three decades. At the cost of a (1+ε)-approximation factor, the running time was recently improved to n^{2+o(1)} by Bernstein and Roditty [2011]. In this paper, we bring the running time down to near-linear: We give a (1+ε)-approximation algorithm with m^{1+o(1)} total update time, thus obtaining near-linear time. Moreover, we obtain m^{1+o(1)} log W time for the weighted case, where the edge weights are integers from 1 to W. The only prior work on weighted graphs in o(mn) time is the mn^{0.9+o(1)}-time algorithm by Henzinger et al. [2014, 2015], which works for directed graphs with quasi-polynomial edge weights. The expected running time bound of our algorithm holds against an oblivious adversary. In contrast to the previous results, which rely on maintaining a sparse emulator, our algorithm relies on maintaining a so-called sparse (h,ε)-hop set introduced by Cohen [2000] in the PRAM literature. An (h,ε)-hop set of a graph G = (V, E) is a set F of weighted edges such that the distance between any pair of nodes in G can be (1+ε)-approximated by their h-hop distance (given by a path containing at most h edges) on G' = (V, E ∪ F). Our algorithm can maintain an (n^{o(1)}, ε)-hop set of near-linear size in near-linear time under edge deletions. It is the first of its kind to the best of our knowledge. To maintain approximate distances using this hop set, we extend the monotone Even-Shiloach tree of Henzinger et al. [2013] and combine it with the bounded-hop SSSP technique of Bernstein [2009; 2013] and Mądry [2010]. These two new tools might be of independent interest.
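
Restating the hop-set condition from the abstract in display form, under the usual convention that shortcut edges never shorten distances; the h-hop distance notation dist^{(h)} is introduced here only for readability.

```latex
% F is an (h,\epsilon)-hop set for G=(V,E) if, writing G'=(V, E \cup F) and
% letting dist^{(h)}_{G'}(u,v) denote the length of a shortest u-v path in G'
% using at most h edges, for all u,v \in V:
\[
  \mathrm{dist}_G(u,v) \;\le\; \mathrm{dist}^{(h)}_{G'}(u,v)
  \;\le\; (1+\epsilon)\,\mathrm{dist}_G(u,v).
\]
```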

Ontology-Mediated Queries: Combined Complexity and Succinctness of Rewritings via Circuit Complexity

We give solutions to two fundamental computational problems in ontology-based data access with the W3C standard ontology language OWL2QL: the succinctness problem for first-order rewritings of ontology-mediated queries (OMQs), and the complexity problem for OMQ answering. We classify OMQs according to the shape of their conjunctive queries (treewidth, the number of leaves) and the existential depth of their ontologies. For each of these classes, we determine the combined complexity of OMQ answering, and whether all OMQs in the class have polynomial-size first-order, positive existential, and nonrecursive datalog rewritings. We obtain the succinctness results using hypergraph programs, a new computational model for Boolean functions, which makes it possible to connect the size of OMQ rewritings and circuit complexity.

Erratum: Limited-Use Atomic Snapshots with Polylogarithmic Step Complexity

This is an erratum for the paper "Limited-Use Atomic Snapshots with Polylogarithmic Step Complexity" published in J. ACM 62(1): 3:1-3:22 (2015). The implementation of a MaxArray_{k×h} object in Algorithm 2 does not guarantee linearizability. We give here a simple correction to the algorithm and its correctness proof.

Pseudorandomness from Shrinkage

One powerful theme in complexity theory and pseudorandomness in the past few decades has been the use of lower bounds to give pseudorandom generators (PRGs). However, the general results using this hardness vs. randomness paradigm suffer a quantitative loss in parameters, and hence do not give nontrivial implications for models where we don't know super-polynomial lower bounds but do know lower bounds of a fixed polynomial. We show that when such lower bounds are proved using random restrictions, we can construct PRGs which are essentially best possible without in turn improving the lower bounds. More specifically, say that a circuit family has shrinkage exponent Γ if a random restriction leaving a p fraction of variables unset shrinks the size of any circuit in the family by a factor of p^{Γ+o(1)}. Our PRG uses a seed of length s^{1/(Γ+1)+o(1)} to fool circuits in the family of size s. By using this generic construction, we get PRGs with polynomially small error for the following classes of circuits of size s and with the following seed lengths: 1. For de Morgan formulas, seed length s^{1/3+o(1)}; 2. For formulas over an arbitrary basis, seed length s^{1/2+o(1)}; 3. For read-once de Morgan formulas, seed length s^{.234...}; 4. For branching programs of size s, seed length s^{1/2+o(1)}. The previous best PRGs known for these classes used seeds of length bigger than n/2 to output n bits, and worked only for size s = O(n) (Bogdanov, Papakonstantinou, & Wan).
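
To make the parameter setting concrete: de Morgan formulas are known to shrink with exponent Γ = 2 under random restrictions, so plugging Γ = 2 into the generic seed length reproduces item 1 above; this is a worked instance of the abstract's formula, not an additional result.

```latex
\[
  \text{seed length} \;=\; s^{\frac{1}{\Gamma+1}+o(1)},
  \qquad
  \Gamma = 2 \;\Longrightarrow\; s^{\frac{1}{3}+o(1)} .
\]
% Similarly, \Gamma = 1 for formulas over an arbitrary basis and for branching
% programs gives seed length s^{1/2+o(1)}, matching items 2 and 4 above.
```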

Shuffles and Circuits (On Lower Bounds for Modern Parallel Computation)

The goal of this paper is to identify fundamental limitations on how efficiently algorithms implemented on platforms such as MapReduce and Hadoop can compute the central problems in the motivating application domains, such as graph connectivity problems. We introduce an abstract model of massively parallel computation, where essentially the only restrictions are that the fan-in of each machine is limited to s bits, where s is smaller than the input size n, and that computation proceeds in synchronized rounds, with no communication between different machines within a round. Lower bounds on the round complexity of a problem in this model apply to every computing platform that shares the most basic design principles of MapReduce-type systems. We prove that computations in our model that use few rounds can be represented as low-degree polynomials over the reals. This connection allows us to translate a lower bound on the (approximate) polynomial degree of a Boolean function to a lower bound on the round complexity of every (randomized) massively parallel computation of that function. These lower bounds apply even in the unbounded width version of our model, where the number of machines can be arbitrarily large. As one example of our general results, computing any non-trivial monotone graph property, such as connectivity, requires a super-constant number of rounds when every machine can accept only a sub-polynomial (in n) number of input bits s. Finally, we prove that, in two senses, our lower bounds are the best one could hope for. For the unbounded-width model, we prove a matching upper bound. Restricting to a polynomial number of machines, we show that asymptotically better lower bounds would separate P from NC^1.

Reachability is in DynFO

Patnaik and Immerman introduced the dynamic complexity class DynFO of database queries that can be maintained by first-order dynamic programs with the help of auxiliary relations under insertions and deletions of edges [34]. This article confirms their conjecture that the Reachability query is in DynFO. As a byproduct it is shown that the rank of a matrix with small values can be maintained in DynFO. It is further shown that the (size of the) maximum matching of a graph can be maintained in non-uniform DynFO, an extension of DynFO with non-uniform initialisation of the auxiliary relations.
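
As an illustration of what a first-order dynamic program looks like, maintaining the reflexive transitive closure relation T under insertion of an edge (u,v) admits the classical first-order update formula below; handling deletions, which is the hard part resolved by this article, requires the auxiliary-relation machinery the authors develop. The formula is a standard textbook example, not the article's construction.

```latex
\[
  T'(x,y) \;\equiv\; T(x,y) \,\lor\, \bigl(T(x,u) \land T(v,y)\bigr).
\]
```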

General Belief Revision

In Artificial Intelligence, a key question concerns how an agent may rationally revise its beliefs in light of new information. The standard (AGM) approach to belief revision assumes that the underlying logic contains classical propositional logic. This is a significant limitation, since many representation schemes in AI don't subsume propositional logic. In this paper we consider what the minimal requirements on a logic are for the AGM approach to revision to be formulated. We show that AGM-style revision can be obtained even when extremely little is assumed of the underlying language and its semantics; in fact, one requires little more than a language with sentences that are satisfied at models, or possible worlds. The classical AGM postulates are expressed in this framework and a representation result is established between the postulate set and certain preorders on possible worlds. To obtain the representation result, we add a new postulate to the AGM postulates, and we add a constraint to preorders on worlds. Crucially, both of these additions are redundant in the original AGM framework, and so we extend, rather than modify, the AGM approach. Iterated revision is also addressed and shown to be compatible with our approach. Various examples are given to illustrate the approach, including Horn clause revision, revision in extended logic programs, and belief revision in a very basic logic called literal revision.

Solving Optimization Problems with Diseconomies of Scale via Decoupling

We present a new framework for solving optimization problems with a diseconomy of scale. In such problems, our goal is to minimize the cost of resources used to perform a certain task. The cost of resources grows superlinearly, as x^q with q > 1, in the amount x of resources used. We define a novel linear programming relaxation for such problems, and then show that the integrality gap of the relaxation is A_q, where A_q is the q-th moment of the Poisson random variable with parameter 1. Using our framework, we obtain approximation algorithms for the Minimum Energy Efficient Routing, Minimum Degree Balanced Spanning Tree, Load Balancing on Unrelated Parallel Machines, and Unrelated Parallel Machine Scheduling with Nonlinear Functions of Completion Times problems. Our analysis relies on the decoupling inequality for nonnegative random variables. The inequality states that ||X_1 + ... + X_n||_q <= C_q ||Y_1 + ... + Y_n||_q, where the X_i are independent nonnegative random variables, the Y_i are possibly dependent nonnegative random variables, and each Y_i has the same distribution as X_i. The inequality was proved by de la Peña in 1990. De la Peña, Ibragimov, and Sharakhmetov showed that C_q <= 2 for q in (1,2] and C_q <= A_q^{1/q} for q >= 2. We show that the optimal constant is C_q = A_q^{1/q} for any q >= 1. We then prove a more general inequality for arbitrary convex functions.
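
In display form, the integrality-gap constant and the decoupling inequality quoted above read as follows; P denotes a Poisson random variable with parameter 1, and the display only restates the abstract.

```latex
\[
  A_q \;=\; \mathbb{E}[P^q], \qquad P \sim \mathrm{Poisson}(1),
\]
\[
  \bigl\| X_1 + \dots + X_n \bigr\|_q \;\le\; C_q \,\bigl\| Y_1 + \dots + Y_n \bigr\|_q,
\]
% with X_i independent nonnegative random variables and Y_i possibly dependent,
% each Y_i distributed as X_i; the paper shows the optimal constant is
% C_q = A_q^{1/q} for every q >= 1.
```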

Weakest Precondition Reasoning for Expected Runtimes of Randomized Algorithms

This paper presents a wp-style calculus for obtaining bounds on the expected runtime of randomized algorithms. Its applications include determining the (possibly infinite) expected termination time of a randomized algorithm and proving positive almost-sure termination: does a program terminate with probability one in finite expected time? We provide several proof rules for bounding the runtime of loops, and prove the soundness of the approach with respect to a simple operational model. We show that our approach is a conservative extension of Nielson's approach for reasoning about the runtime of deterministic programs. We analyze the expected runtime of some example programs including the coupon collector's problem, a one-dimensional random walk, and a randomized binary search.
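
For instance, the coupon collector's problem mentioned above has the standard closed-form expected runtime shown below, which is the kind of bound such a calculus is designed to certify; the formula is the classical result, not a derivation taken from the paper.

```latex
\[
  \mathbb{E}[T_n] \;=\; n \sum_{i=1}^{n} \frac{1}{i} \;=\; n\,H_n \;=\; n\ln n + O(n),
\]
% where T_n is the number of uniformly random draws needed to collect all n coupons.
```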

Unifying Concurrent Objects and Distributed Tasks: Interval-linearizability

Tasks and objects are two predominant ways of specifying distributed problems where processes should compute outputs based on their inputs. Roughly speaking, a task specifies, for each set of processes and each possible assignment of input values, their valid outputs. In contrast, an object is defined by a sequential specification. Also, an object can be invoked multiple times by each process, while a task is a one-shot problem. Each requires its own implementation notion, stating when an execution satisfies the specification. For objects, linearizability is commonly used, while implementation notions for tasks are less explored. The paper introduces the notion of an interval-sequential object, and the corresponding implementation notion of interval-linearizability, to encompass many problems that have no sequential specification as objects. It is shown that interval-sequential specifications are local, namely, one can consider interval-linearizable object implementations in isolation and compose them for free, without sacrificing interval-linearizability of the whole system. The paper also introduces the notion of refined tasks and its corresponding satisfiability notion. In contrast to a task, a refined task can be invoked multiple times by each process. Also, objects that cannot be defined using tasks can be defined using refined tasks. In fact, a main result of the paper is that interval-sequential objects and refined tasks have the same expressive power. Interval-linearizability goes beyond unifying objects and tasks; it sheds new light on both of them. On the one hand, it brings to tasks the following benefits: an explicit operational semantics, a more precise implementation notion, a notion of state, and a locality property. On the other hand, refined tasks open new possibilities of applying topological techniques to objects.

On algebraic branching programs of small width

In 1979 Valiant showed that the complexity class VP_e of families with polynomially bounded formula size is contained in the class VP_s of families that have algebraic branching programs (ABPs) of polynomially bounded size. Motivated by the problem of separating these classes we study the topological closure VP_e-bar, i.e. the class of polynomials that can be approximated arbitrarily closely by polynomials in VP_e. We describe VP_e-bar with a strikingly simple complete polynomial (in characteristic different from 2) whose recursive definition is similar to the Fibonacci numbers. Further understanding this polynomial seems to be a promising route to new formula lower bounds. Our methods are rooted in the study of ABPs of small constant width. In 1992 Ben-Or and Cleve showed that formula size is polynomially equivalent to width-3 ABP size. We extend their result (in characteristic different from 2) by showing that approximate formula size is polynomially equivalent to approximate width-2 ABP size. This is surprising because in 2011 Allender and Wang gave explicit polynomials that cannot be computed by width-2 ABPs at all! The details of our construction lead to the aforementioned characterization of VP_e-bar. As a natural continuation of this work we prove that the class VNP can be described as the class of families that admit a hypercube summation of polynomially bounded dimension over a product of polynomially many affine linear forms. This gives the first separations of algebraic complexity classes from their nondeterministic analogs.

The Cost of Unknown Diameter in Dynamic Networks

For dynamic networks with unknown diameter, we prove novel lower bounds on the time complexity of a range of basic distributed computing problems. Together with trivial upper bounds under dynamic networks with known diameter for these problems, our lower bounds show that the complexities of all these problems are sensitive to whether the diameter is known to the protocol beforehand: Not knowing the diameter increases the time complexities by a large poly(N) factor as compared to when the diameter is known, resulting in an exponential gap. Our lower bounds are obtained via communication complexity arguments and by reducing from the two-party DisjointnessCP problem. We further prove that sometimes this large poly(N) cost can be completely avoided if the protocol is given a good estimate on N. In other words, having such an estimate makes some problems no longer sensitive to unknown diameter.

Invited Article Foreword for 65.6

Settling the query complexity of non-adaptive junta testing

We prove that any non-adaptive algorithm that tests whether an unknown Boolean function f: {0,1}^n → {0,1} is a k-junta or ε-far from every k-junta must make Ω(k^{3/2}/ε) many queries for a wide range of parameters k and ε. Our result dramatically improves previous lower bounds from [BGSMdW13, STW15], and is essentially optimal given Blais's non-adaptive junta tester from [Bla08], which makes O(k^{3/2})/ε queries. Combined with the adaptive tester of [Bla09], which makes O(k log k + k/ε) queries, our result shows that adaptivity enables polynomial savings in query complexity for junta testing.
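
Putting the two bounds side by side makes the polynomial gap explicit; the display only juxtaposes the bounds already stated above.

```latex
\[
  \underbrace{\Omega\!\bigl(k^{3/2}/\epsilon\bigr)}_{\text{non-adaptive (this work)}}
  \qquad\text{vs.}\qquad
  \underbrace{O\!\bigl(k\log k + k/\epsilon\bigr)}_{\text{adaptive [Bla09]}},
\]
% so for constant \epsilon, adaptive testers use roughly k log k queries while
% non-adaptive ones need k^{3/2}: a polynomial (about k^{1/2}) savings.
```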

The PCL Theorem. Transactions cannot be Parallel, Consistent and Live.

We establish the PCL theorem which states that it is impossible to design a transactional memory algorithm which ensures (1) parallelism, i.e. transactions do not need to synchronize unless they access the same application objects, (2) very little consistency, i.e. a consistency condition, called weak adaptive consistency, introduced here and which is weaker than snapshot isolation, processor consistency, and any other consistency condition stronger than them (such as opacity, serializability, causal serializability, etc.), and (3) very little liveness, i.e. that transactions eventually commit if they run solo.

Engineering with Logic: Rigorous Test-Oracle Specification and Validation for TCP/IP and the Sockets API

Conventional computer engineering relies on test-and-debug development processes, with the behaviour of common interfaces described (at best) with prose specification documents. But prose specifications cannot be used in test-and-debug development in any automated way, and prose is a poor medium for expressing complex (and loose) specifications. The TCP/IP protocols and Sockets API are a good example of this: they play a vital role in modern communication and computation, and interoperability between implementations is essential. But what exactly they are is surprisingly obscure: their original development focussed on "rough consensus and running code", augmented by prose RFC specifications that do not precisely define what it means for an implementation to be correct. Ultimately, the actual standard is the de facto one of the common implementations, including, for example, the 15,000-20,000 lines of the BSD implementation: optimised and multithreaded C code, time-dependent, with asynchronous event handlers, intertwined with the operating system, and security-critical. This paper reports on work done in the Netsem project to develop lightweight mathematically rigorous techniques that can be applied to such systems: to specify their behaviour precisely (but loosely enough to permit the required implementation variation) and to test whether these specifications and the implementations correspond, with specifications that are executable as test oracles. We developed post-hoc specifications of TCP, UDP, and the Sockets API, both of the service that they provide to applications (in terms of TCP bidirectional stream connections), and of the internal operation of the protocol (in terms of TCP segments and UDP datagrams), together with a testable abstraction function relating the two. These specifications are rigorous, detailed, readable, with broad coverage, and are rather accurate. Working within a general-purpose proof assistant (HOL4), we developed language idioms (within higher-order logic) in which to write the specifications: operational semantics with nondeterminism, time, system calls, monadic relational programming, etc. We followed an experimental semantics approach, validating the specifications against several thousand traces captured from three implementations (FreeBSD, Linux, and WinXP). Many differences between these were identified, and a number of bugs. Validation was done using a special-purpose symbolic model checker programmed above HOL4. Having demonstrated that our logic-based engineering techniques suffice for handling real-world protocols, we argue that similar techniques could be applied to future critical software infrastructure at design time, leading to cleaner designs and (via specification-based testing) more robust and predictable implementations. In cases where specification looseness can be controlled, this should be possible with lightweight techniques, without the need for a general-purpose proof assistant, at relatively little cost.

Indistinguishability Obfuscation from Functional Encryption

Indistinguishability obfuscation (IO) is a tremendous notion, powerful enough to give rise to almost any known cryptographic object. Prior candidate IO constructions were based on specific assumptions on algebraic objects called multi-linear graded encodings. We present a generic construction of indistinguishability obfuscation from public-key functional encryption with succinct encryption circuits and subexponential security. This shows the equivalence of indistinguishability obfuscation and public-key functional encryption, a primitive that has so far seemed to be much weaker, lacking the power and the staggering range of applications of indistinguishability obfuscation. Our main construction can be based on functional encryption schemes that support a single function key, and where the encryption circuit grows sub-linearly in the circuit-size of the function. We further show that sublinear succinctness in circuit-size for single-key schemes can be traded with sublinear succinctness in the number of keys (also known as the collusion-size) for multi-key schemes. As a consequence, we obtain a new candidate IO construction based on the functional encryption scheme of Garg, Gentry, Halevi and Zhandry (TCC'16) under their assumptions on multi-linear graded encodings. We also show that, under the Learning with Errors assumption, our techniques imply that any indistinguishability obfuscator can be converted into one where the size of obfuscated circuits is twice that of the original circuit plus an additive overhead that is polynomial in its depth, input length, and the security parameter. Our reduction highlights the importance of succinctness in functional encryption schemes, which we hope will serve as a pathway to new IO constructions based on solid cryptographic foundations.

Matroid Secretary Problems

We define a generalization of the classical secretary problem called the matroid secretary problem. In this problem, the elements of a matroid are presented to an online algorithm in uniformly random order. When an element arrives, the algorithm observes its value and must make an irrevocable decision whether or not to accept it. The accepted elements must form an independent set, and the objective is to maximize the combined value of these elements. We present an O(log k)-competitive algorithm for general matroids (where k is the rank of the matroid), and constant-competitive algorithms for several special cases including graphic matroids, truncated partition matroids, and bounded degree transversal matroids. We leave as an open question the existence of constant-competitive algorithms for general matroids. Our results have applications in welfare-maximizing online mechanism design for domains in which the sets of simultaneously satisfiable agents form a matroid.

Circuit complexity, proof complexity, and polynomial identity testing: The ideal proof system

We introduce a new and natural algebraic proof system, whose complexity measure is essentially the algebraic circuit size of Nullstellensatz certificates. This enables us to exhibit close connections between effective Nullstellensätze, proof complexity, and (algebraic) circuit complexity. In particular, we show that any super-polynomial lower bound on any Boolean tautology in our proof system implies that the permanent does not have polynomial-size algebraic circuits (VNP ≠ VP). We also show that super-polynomial lower bounds on the number of lines in Polynomial Calculus proofs imply the Permanent versus Determinant Conjecture. Note that there was no proof system prior to ours for which lower bounds on an arbitrary tautology implied any computational lower bound. Our proof system helps clarify the relationships between previous algebraic proof systems, and begins to shed light on why proof complexity lower bounds for various proof systems have been so much harder than lower bounds on the corresponding circuit classes. In doing so, we highlight the importance of polynomial identity testing (PIT) in proof complexity. In particular, we use PIT to illuminate AC^0[p]-Frege lower bounds, which have been open for nearly 30 years, with no satisfactory explanation as to their apparent difficulty. Finally, we explain the obstacles that must be overcome in any attempt to extend techniques from algebraic circuit complexity to prove lower bounds in proof complexity. Using the algebraic structure of our proof system, we propose a novel route to such lower bounds. Although such lower bounds remain elusive, this proposal should be contrasted with the difficulty of extending AC^0[p] circuit lower bounds to AC^0[p]-Frege lower bounds.
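
For context, a Nullstellensatz certificate for an unsatisfiable system of polynomial equations f_1 = ... = f_m = 0 over a field is a tuple of polynomials g_1, ..., g_m witnessing the identity below; the proof system's complexity measure is, roughly, the size of algebraic circuits needed to express such certificates. This is the standard definition, included only as background.

```latex
\[
  \sum_{i=1}^{m} g_i(\bar x)\, f_i(\bar x) \;=\; 1 .
\]
```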

Parallel Metric Tree Embedding based on an Algebraic View on Moore-Bellman-Ford

A metric tree embedding of expected stretch α ≥ 1 maps a weighted n-node graph G = (V, E, ω) to a weighted tree T = (V_T, E_T, ω_T) with V ⊆ V_T such that, for all v, w ∈ V, dist(v, w, G) ≤ dist(v, w, T) and E[dist(v, w, T)] ≤ α · dist(v, w, G). Such embeddings are highly useful for designing fast approximation algorithms, as many hard problems are easy to solve on tree instances. However, to date the best parallel (polylog n)-depth algorithm that achieves an asymptotically optimal expected stretch of α ∈ O(log n) requires Ω(n^2) work and a metric as input. In this paper, we show how to achieve the same guarantees using polylog n depth and Õ(m^(1+ε)) work, where m = |E| and ε > 0 is an arbitrarily small constant. Moreover, one may further reduce the work to Õ(m + n^(1+ε)) at the expense of increasing the expected stretch to O(ε^(-1) log n). Our main tool in deriving these parallel algorithms is an algebraic characterization of a generalization of the classic Moore-Bellman-Ford algorithm. We consider this framework, which subsumes a variety of previous "Moore-Bellman-Ford-like" algorithms, to be of independent interest and discuss it in depth. In our tree embedding algorithm, we leverage it for providing efficient query access to an approximate metric that allows sampling the tree using polylog n depth and Õ(m) work. We illustrate the generality and versatility of our techniques by various examples and a number of additional results. Specifically, we (i) improve the state of the art for determining metric tree embeddings in the Congest model, (ii) determine a (1 + ε')-approximate metric regarding the distances in a graph G in polylogarithmic depth and Õ(n·m^(1+ε)) work, and (iii) improve upon the state of the art regarding the k-median and the buy-at-bulk network design problems.
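
In display form, the two conditions defining an embedding of expected stretch α, as stated in the abstract, are:

```latex
\[
  \mathrm{dist}(v,w,G) \;\le\; \mathrm{dist}(v,w,T)
  \qquad\text{and}\qquad
  \mathbb{E}\bigl[\mathrm{dist}(v,w,T)\bigr] \;\le\; \alpha \cdot \mathrm{dist}(v,w,G)
  \quad\text{for all } v,w \in V .
\]
```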

Near-Optimal Lower Bounds on Quantifier Depth and Weisfeiler-Leman Refinement Steps

We prove near-optimal trade-offs for quantifier depth versus number of variables in first-order logic by exhibiting pairs of n-element structures that can be distinguished by a k-variable first-order sentence but where every such sentence requires quantifier depth at least n^{Ω(k/log k)}. Our trade-offs also apply to first-order counting logic, and by the known connection to the k-dimensional Weisfeiler-Leman algorithm imply near-optimal lower bounds on the number of refinement iterations. A key component in our proof is the hardness condensation technique recently introduced by [Razborov '16] in the context of proof complexity. We apply this method to reduce the domain size of relational structures while maintaining the minimal quantifier depth to distinguish them in finite variable logics.
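
As background for readers unfamiliar with the algorithm, the sketch below implements the 1-dimensional special case (classical colour refinement) and counts refinement passes; the article's lower bounds concern the far more powerful k-dimensional generalisation, and the code and example graph are illustrative, not from the paper.

```python
def color_refinement(adj):
    """1-dimensional Weisfeiler-Leman: repeatedly recolour each vertex by its
    current colour plus the multiset of its neighbours' colours, until stable.
    Returns the stable colouring and the number of passes performed
    (the last pass confirms stability)."""
    colors = {v: 0 for v in adj}                 # start with a uniform colouring
    passes = 0
    while True:
        passes += 1
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Canonically rename signatures to small integers.
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: relabel[signatures[v]] for v in adj}
        if new_colors == colors:
            return colors, passes
        colors = new_colors

# A path on 4 vertices: endpoints and middle vertices end up with different colours.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(color_refinement(adj))
```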

The Parameterized Complexity of k-Biclique Problem

Given a graph G and an integer k, the k-Biclique problem asks whether G contains a complete bipartite subgraph with k vertices on each side. Whether there is an f(k)·|G|^{O(1)}-time algorithm solving k-Biclique for some computable function f has been a long-standing open problem. We show that such an algorithm is unlikely to exist under a hypothesis from parameterized complexity theory. To prove this result, we give a reduction that, on input an n-vertex graph G and a small integer k, constructs a bipartite graph H = (L ∪ R, E) in time polynomial in n such that if G contains a clique with k vertices, then there are k(k-1)/2 vertices in L with n^{Θ(1/k)} common neighbors; otherwise, any k(k-1)/2 vertices in L have at most (k+1)! common neighbors. An additional feature of this reduction is that it creates a gap on the right side of the biclique. Such a gap might have further applications in proving hardness of approximation results. Assuming a randomized version of the Exponential Time Hypothesis, we establish an f(k)·|G|^{o(√k)}-time lower bound for k-Biclique for any computable function f. Combining our result with the work of [Bulatov and Marx 2014], we obtain a dichotomy classification of the parameterized complexity of cardinality constraint satisfaction problems.
